This is the third in a series of posts on the subject of ‘How the semantic web can crowdsource high-quality judgment and improve policymaking’. In part 2, last week, I described how existing content – the blogosphere, in particular – is currently used, or perhaps abused, by policymakers.
This time, I’m going to cover a range of improvements – how we can make better use of existing content, and why we’d want to do so – split roughly into (a) technical solutions and (b) human solutions.
(i) Technology: Aggregation vs. isolation
Political blog aggregators are still very rare, especially in the UK. Creating and maintaining an application that is able to monitor hundreds or thousands of feeds, and produce new, aggregated feeds in a timely fashion, is neither trivial nor cheap. Nonetheless, when I created Bloggers4Labour in early 2005, I showed that usable aggregators were both possible, and – certainly at the time – desirable. By providing the media with a single window onto a wide range of blogging opinion, the blogging oligarchy I mentioned last week could perhaps have been broken.
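The core of an aggregator like this is straightforward: poll many feeds, then merge their entries into a single newest-first timeline that treats every source equally. A minimal sketch in Python (with hypothetical in-memory entries standing in for parsed RSS/Atom feeds – a real system would fetch and parse the feeds themselves):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Entry:
    feed: str        # the source blog
    title: str
    published: datetime

def aggregate(feeds: List[List[Entry]], limit: int = 10) -> List[Entry]:
    """Merge entries from many feeds into one newest-first timeline,
    giving every source an equal footing."""
    merged = [entry for feed in feeds for entry in feed]
    merged.sort(key=lambda e: e.published, reverse=True)
    return merged[:limit]

# Hypothetical sample data standing in for parsed feeds
feed_a = [Entry("Blog A", "On housing policy", datetime(2009, 7, 1))]
feed_b = [Entry("Blog B", "Transport reform", datetime(2009, 7, 3)),
          Entry("Blog B", "Older post", datetime(2009, 6, 20))]

timeline = aggregate([feed_a, feed_b])
print([e.title for e in timeline])
# → ['Transport reform', 'On housing policy', 'Older post']
```

The expensive parts in practice are the fetching, parsing, and scheduling across thousands of feeds – the merge itself is the easy bit.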
Only when all blogs are aggregated – on an equal footing, and irrespective of their political affiliation and their nationality – can the blogosphere become the comprehensive, fair, and effective knowledge base it needs to be. We don’t want to throw contextual information away, but rather than let it entrench artificial barriers, we should let technology draw its own, more useful inferences.
Thus aggregation should become the norm, rather than the exception – or rather, the least we should expect. Furthermore, bloggers should be encouraged to leave the safety of their partisan networks, and become global political actors.
(ii) Technology: Breaking down barriers
Rather than being bound by technological limitations and by non-interoperable software tools, and rather than advocating one particular package or way of working, any new crowdsourcing platform should use technology to enable everyone concerned with policy development to participate in a more informed and productive way.
Imagine a knowledge base that not only lets you see related content for any article you read, but that automatically suggests content as you start to develop a new article. You might discover articles that refuted the argument you just made, that provided you with valuable supporting evidence, or that caused your article to take a different path. Imagine how easily a policy could be decided upon without those crucial points ever having been made, and how expensive and time-consuming such a failed policy could be.
The old ‘linear’ aggregator model – with its single time-line of unrelated blog posts – is not much help here. Only by bringing all types of expressed opinion together on an equal basis, collapsing the distinctions between the various types, and replacing single time-lines with a web of matched, linked, and related information, can we achieve a really usable knowledge base, that’s easy to visualise and to navigate.
Debategraph-style maps, collaboratively edited documents and Wikis, and aggregated blog content will all be represented in this web. There may well also be a place for Twitter messages and open-source Government data. The overall goal should be to let structured data and mappings bring precision to blog posts, and to let blog posts bring context and detail to structured debates.
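The simplest way to weave a web of related information is to measure overlap between items, whatever their type. As an illustrative sketch (not a proposal for the actual matching algorithm, which would need something far richer than word overlap), here is a Jaccard-similarity matcher that surfaces corpus items related to a draft; the corpus titles are invented for the example:

```python
def tokens(text: str) -> set:
    """Crude tokeniser: lowercase words, punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between two documents' word sets (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def related(draft: str, corpus: dict, threshold: float = 0.2):
    """Return (title, score) pairs from the corpus that look related
    to the draft, most similar first."""
    scored = ((title, similarity(draft, body))
              for title, body in corpus.items())
    return sorted((pair for pair in scored if pair[1] >= threshold),
                  key=lambda pair: pair[1], reverse=True)

# Hypothetical mixed corpus: a debate map node and a blog post
corpus = {
    "Debate map: rail franchising": "rail franchising costs and subsidy levels",
    "Blog post: school meals": "free school meals and child nutrition",
}
print(related("subsidy levels in rail franchising", corpus))
```

Only the rail item clears the threshold – the point being that debate-map nodes, wiki pages, and blog posts can all sit in one corpus and be linked by the same mechanism.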
(iii) Technology: The Semantic Web
Technical solutions that understand the content they are given will always produce more relevant results than the 99% that don’t. Furthermore, solutions that use sentiment analysis can identify whether a particular individual, or concept, is being talked about in a positive, neutral, or negative light. This opens up the possibility of being able to automatically identify supporting or contradictory evidence for policies mentioned in existing articles, and in new policy documents as they are created. Once again, technology plus existing content can be used to support good policy, strike out bad policy, and save time and effort, not to mention embarrassment.
(iv) Human crowdsourcing: Collaborative editing
Collaborative editing – currently a niche interest – should become the norm, in contrast to the disjointed, sequential model of blog-commenting that is popular today. It is vital in the literal sense: it adds value, and adds life, to already-expressed opinion. The blog post of last year – overtaken by events and discredited – can be transformed into a post that acknowledges its original mistakes, assimilates new information, and becomes a valuable addition to the policy debate.
Collaborative editing also accustoms bloggers to a new way of working: by exposing them to scrutiny it encourages more thought and greater responsibility, but at the same time it rewards the extra effort, by giving bloggers – especially new ones, those who are less well-connected, and therefore those who might have the most original ideas – the reassurance that their output is being read and considered by a wider audience than before. While firing off posts into the ether can be cathartic, my experience tells me that bloggers prefer to be engaged in a greater debate.
In future, contributors will adapt an existing blog post – working within the existing context – and create new branches, or sub-versions, that other contributors can approve and rate, and use as the basis for their own versions. Over time, the most active, the most popular, and the most highly regarded versions will rise to the surface. These versions may well be quite different from one another – after all, while agreement and resolution are fine things, political disagreement can also be valuable – and they will themselves be far more useful than the undistilled thoughts of just one blogger.
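The branching-and-rating model described above can be sketched as a simple data structure: each version records its parent and collects ratings, and the platform surfaces the most highly regarded branches. All names and figures here are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Version:
    """One branch of a collaboratively edited post."""
    author: str
    text: str
    parent: Optional["Version"] = None   # None for the original post
    ratings: List[int] = field(default_factory=list)

    def rate(self, score: int) -> None:
        self.ratings.append(score)

    @property
    def avg(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

def top_versions(versions: List[Version], n: int = 2) -> List[Version]:
    """Surface the most highly regarded branches."""
    return sorted(versions, key=lambda v: v.avg, reverse=True)[:n]

original = Version("alice", "Original post on transport policy")
branch1 = Version("bob", "Revised with new cost figures", parent=original)
branch2 = Version("carol", "Counter-argument branch", parent=original)
branch1.rate(5); branch1.rate(4)
branch2.rate(3)
print([v.author for v in top_versions([original, branch1, branch2])])
# → ['bob', 'carol']
```

Note that nothing here forces branches to converge: two well-rated, mutually contradictory versions can coexist, which is exactly the point about valuable disagreement.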
There is no reason why those used to the current model of blog commenting should not contribute by adding their suggestions at the foot of the original article, rather than working within the framework of the original. Potentially useful insights should not be lost, even if they cannot immediately be related to the existing content. The important thing is that contributors are not limited – or forced to work in a particular way – by technology that dates back to the early days of the Web.
(v) Human crowdsourcing: Juries, assertion-flagging, and data cleanup
There’s a lot more humans can do with a crowdsourcing platform besides creating new content (individually or collectively), flagging, and rating.
The platform can invite – or randomly select – disinterested participants (i.e. those who don’t have a personal connection with the issue at hand) to work together on a particular debate, marking up relevant arguments, marking down irrelevant arguments, linking similar ones, and perhaps trying to find resolutions in other areas: essentially doing things that are just too tricky for a computer to do. The Guardian’s recent, and very successful, crowdsourced MPs’ expenses exercise is a good example of this. Provide users with an incentive to donate their time and brainpower to the community, and great benefits can be reaped.
Another task humans can perform is to manually tag assertions within articles they read, and ask the platform to contact the original author / blogger so that they can respond with supporting evidence. Those who respond satisfactorily will be given credit for having done so, and their response will be attached to the original article, taking its place in the knowledge base for others to consult.
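The flag-and-respond workflow could be modelled along these lines – a flagged assertion awaiting evidence, and a credit tally for authors who respond. The class and field names are hypothetical, and a real platform would notify authors rather than just record the flag:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Assertion:
    article: str
    claim: str
    flagged_by: str
    evidence: Optional[str] = None   # filled in when the author responds

class Platform:
    """Tracks flagged assertions and credits authors who supply evidence."""
    def __init__(self) -> None:
        self.assertions: List[Assertion] = []
        self.credit: Dict[str, int] = {}

    def flag(self, article: str, claim: str, reader: str) -> Assertion:
        assertion = Assertion(article, claim, reader)
        self.assertions.append(assertion)
        return assertion  # a real system would now contact the author

    def respond(self, assertion: Assertion, author: str, evidence: str) -> None:
        assertion.evidence = evidence
        self.credit[author] = self.credit.get(author, 0) + 1

p = Platform()
a = p.flag("Post on NHS spending", "Spending fell in real terms", reader="dan")
p.respond(a, "author1", "ONS figures, 2008 release")
print(p.credit)
# → {'author1': 1}
```

The response, once attached, travels with the original article – so the evidence takes its place in the knowledge base rather than being buried in a comment thread.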
(vi) Conclusion
I hope I’ve succeeded in setting out a brighter vision of how crowdsourcing can improve policymaking, making it better informed and more efficient; how technology can be used more, and more effectively; how political blogging has a potentially enormous part to play; and how bloggers have a lot to gain by getting involved with a new crowdsourcing platform.
In the next part I’ll talk about how the desire to achieve these things inspired my Poblish project, and how Poblish plans to turn these hopes into reality.