“Raw Data Now!”

January 28, 2010

‘Data’ is not synonymous with ‘meaning’. Yet amid all the recent fuss about Sir Tim Berners-Lee’s attempt to overturn the UK Civil Service’s ingrained culture of secrecy, that distinction might easily be overlooked.

The announcement of data.gov.uk is to be welcomed, but it is only the first step on a long and complex road. The fears expressed by the data custodians – that data might be interpreted differently from the way intended – just show how much we are still governed by vested interests who act ‘for our own good’. Sorry: give us the data, and let us make our own interpretations, good or bad.

So, data.gov.uk is a good thing. But it could turn into a veritable Pandora’s Box without some kind of agreed framework within which data are interpreted and evaluated. I am indebted to the KIDMM community for flagging up the fact that a European focus group has been working on this very problem for some time.

The all-Europe Comité Européen de Normalisation (CEN) is a rather shadowy organisation which seems to work on standards issues in the background, and then suddenly springs into the limelight with a proposal for a new ISO standard. One of their workshops – Discovery of and Access to eGovernment Resources (CEN/ISSS WS/eGov-Share) – appears to have done precisely this with (I assume) a proposal to the SC34 working group (ISO/IEC JTC1/SC34/WG3). This working group is concerned with producing standard architectures for information management and interchange based on SGML, and its current focus is the Topic Maps standard (ISO/IEC 13250).

Well, you know me. Any mention of Topic Maps and I’m anybody’s. So when I hear of an initiative that has developed a protocol for exchanging information about semantic descriptions – one that conforms to the Atom Syndication Format and the Topic Maps Data Model and, moreover, works with semantic descriptions represented in XTM 1.0, XTM 2.0 and RDF/XML – then, well, Nirvana!
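To make the idea concrete, here is a minimal sketch of what consuming such a feed might look like: a plain Atom feed whose entries link to semantic-description fragments (XTM or RDF/XML). The feed content, URLs and media types below are invented for illustration, not taken from the SDShare specification itself.

```python
import xml.etree.ElementTree as ET

# A hypothetical SDShare-style fragment feed: standard Atom elements
# whose entries link to semantic descriptions (XTM or RDF/XML).
# All URLs and titles here are made up for illustration.
FEED = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example fragment feed</title>
  <updated>2010-01-28T12:00:00Z</updated>
  <entry>
    <title>Topic updated: data.gov.uk dataset</title>
    <updated>2010-01-28T11:59:00Z</updated>
    <link rel="alternate" type="application/x-tm+xml;version=2.0"
          href="http://example.org/fragments/topic-42.xtm"/>
  </entry>
  <entry>
    <title>Topic updated: CEN workshop</title>
    <updated>2010-01-28T11:58:00Z</updated>
    <link rel="alternate" type="application/rdf+xml"
          href="http://example.org/fragments/topic-43.rdf"/>
  </entry>
</feed>"""

ATOM = "{http://www.w3.org/2005/Atom}"

def list_fragments(feed_xml):
    """Return (title, href, media type) for each entry in an Atom feed."""
    root = ET.fromstring(feed_xml)
    fragments = []
    for entry in root.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title")
        link = entry.find(ATOM + "link")
        fragments.append((title, link.get("href"), link.get("type")))
    return fragments

for title, href, media_type in list_fragments(FEED):
    print(title, "->", href, "(", media_type, ")")
```

The attraction is exactly that nothing exotic is needed on the consumer side: any Atom-aware client can follow the links and fetch the semantic descriptions in whichever syntax it prefers.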

Thanks to KIDMM, if you’re interested (and you should be!), the full specification of the protocol is available as SDShare: Protocol for the Syndication of Semantic Descriptions.

Let us know what you think of it, and of its potential in making sense of the vast amounts of data due to be released on the Web.


Self-Signifying Data

January 13, 2010

** Event rescheduled to March 24, 2010 **

The digitally-supported brain

Those ISKO UK members who attended Dave Snowden’s memorable seminar in April 2009 – Human-machine symbiosis for data interpretation – may remember what I can only describe as a collective gestalt. ‘Self-signifying data’ was the phrase, I believe, and it proved more memorable for many even than Dave’s ‘Blanket Octopodes’ (see Dave’s slide set for an explanation).

Why raise this again now? Well, because ISKO UK member Jan Wyllie and business partner Simon Eaton have for some time been developing a web site/application based on those very principles of self-signifying data described by Dave Snowden. Their Open Intelligence initiative draws on years of experience in Content Analysis, brought up to date through Web 2.0 technology. And it’s highly relevant to KO professionals, because KO technologies – categorisation and taxonomies – are at the heart of the Open Intelligence approach.

If making systematic inferences from communications flows using faceted taxonomies, turning the tables on the knowledge glut with content analysis techniques, or raising the value and productivity of work groups through a much higher level of common knowledge rings a bell for you, then consider attending Jan Wyllie’s one-day Ark Group Masterclass Content analysis: Using taxonomies to improve collaboration. It’s to be held on 10 February 2010, in London, and will feature “the first public showing of the all new Open Intelligence software dedicated to making the social networking experience of creating collaborative intelligence, an engaging, as well as a valuable and productive use of a community’s knowledge working time.”

Further details of this important and pioneering effort are available on the Open Intelligence web site, where you can also book your place.


Trying to please everyone

September 18, 2009

One of the enduring attractions of our profession (that’s information management, knowledge management, records management, information science, knowledge organization – whatever you want to call it) is, for me, that it impacts upon everything. Yes, literally, everything. When we build a taxonomy, relate descriptors in a thesaurus or assign keywords, we are mediators among a multiplicity of points-of-view, creeds and catechisms. But while that heterogeneity, that multicultural dimension, is often the root of our sense of fulfilment, contention can lie just below the surface.

To focus on one problem in particular, how can we know whether a taxonomy we build is ‘true’ – or perhaps ‘authoritative’? Is there such a thing as ‘universal truth’? Do we all see things the same way? Or, to put it another way, how do we distinguish between – and accommodate – the subjective and the objective?

For instance, when we build a taxonomy, or a navigation scheme for a web site, how can we capture the viewpoint of the majority, whilst also allowing for the individual – even idiosyncratic – point-of-view? Thus do philosophy and politics enter an otherwise cosy world.
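One common, if partial, compromise is to let a controlled vocabulary carry a single preferred label per concept (the ‘majority’ view) while admitting any number of non-preferred entry terms through which individual or idiosyncratic vocabularies can still find their way in. A minimal sketch of that idea, with invented concepts and labels:

```python
# A toy thesaurus: one preferred label per concept, plus alternative
# (non-preferred) labels as entry points for other viewpoints.
# The concepts and labels below are invented examples.

class Concept:
    def __init__(self, pref_label, alt_labels=()):
        self.pref_label = pref_label        # the agreed, 'official' term
        self.alt_labels = set(alt_labels)   # idiosyncratic entry terms

class Thesaurus:
    def __init__(self, concepts):
        self._index = {}
        for c in concepts:
            self._index[c.pref_label.lower()] = c
            for alt in c.alt_labels:
                self._index[alt.lower()] = c

    def resolve(self, term):
        """Map any known label, preferred or not, to its concept."""
        return self._index.get(term.lower())

th = Thesaurus([
    Concept("Knowledge organization",
            alt_labels=["KO", "knowledge organisation", "taxonomy work"]),
])
print(th.resolve("taxonomy work").pref_label)  # prints "Knowledge organization"
```

This is essentially the preferred/alternative-label pattern familiar from thesaurus standards and SKOS: the individual viewpoint survives as an entry term, but navigation and display converge on the shared label.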

It’s a problem addressed recently by Fran Alexander of the Department of Information Studies, University College London, who mounted a highly stimulating poster at ISKO UK’s conference on 22-23 June 2009. The poster provides an interesting first sight of the complex nexus among business sector objectives, attendant socio-economic-environmental constraints, and the influence exerted by the relative subjectivity/objectivity of the domain.

The degree to which a conceptual framework is held in common, the coherence of interpretation of that framework among its stakeholders, and the terminological system designed to represent it, all depend upon a process of intersubjective creation of shared meaning within a defined socio-cultural context. In other words, politics. Taxonomy is therefore partly political, partly individual and partly pragmatic.

Melvil Dewey deserves his place in the history of KO for his balanced accommodation of all three dimensions at the time he devised the DDC. But we’re over 130 years further on now, and the mix of political, personal and practical elements required to reflect current understanding of the world (or organization) has changed immensely. Dewey’s innocent assumptions, drawn from the Weltanschauung of his time, appear at best inappropriate, sometimes biased and often incorrect in a 21st-century context.

In a rather adept (and certainly persuasive) essay in the latest issue of Knowledge Organization*, Richard Davies asks ‘Should Philosophy Books Be Treated As Fiction?’. He makes the point that, in the terms used here, the intersubjective creation of meaning in the domain of philosophy has barely occurred; rather the opposite, in fact, each philosopher seeming bent upon distinguishing his/her approach from those of predecessors. This occurs, although to a lesser degree, in most other domains as well, amongst them the 15 or so covered by Fran Alexander’s research.

Fran’s conclusion is that “The mediation of subjectivity/objectivity is becoming increasingly relevant in a ‘user-centric’ age.” So an awareness of the degree of ‘objectivity’ of a taxonomy project is becoming vital to its functional effectiveness – and this is inevitably governed, to some extent, by political considerations and by the degree to which those who fund such projects perceive the taxonomist’s role as having a political dimension.

This is an interesting piece of research and I urge you to take a closer look at Fran’s poster, and to allow it to stimulate your own thoughts on the issues involved.

* Davies, Richard. Should Philosophy Books Be Treated As Fiction? Knowledge Organization, 36(2/3), 121-129.


Making Sense of Human-Machine Symbiosis

April 12, 2009
Cynefin Model

Cynefin Model

A NUMBER of people have remarked to me that Dave Snowden’s title for his forthcoming talk to ISKO UK on 23 April 2009 is less than informative. Well, it depends on how well you know his work since he moved on from IBM’s Institute for Knowledge Management and the Cynefin Centre to focus on his own company, Cognitive Edge Pte Ltd.

I’m no expert in Cognitive Edge’s pioneering approach, but maybe I can shed some light on themes he might address in his talk by describing the context within which I apprehend it, and making a few other links along the way.

The processes of organizing and sharing knowledge are complex because people are involved in both the input and the output. However much we try to codify and structure both, there is always that residue of ‘fuzziness’ – un-order – which Checkland in his Soft Systems Methodology described as giving rise to ‘ill-defined’ or ‘soft’ problems.  Although the computer can help us greatly with codification and structure, it has been virtually useless in the face of soft problems – until perhaps the advent of Web 2.0.

As we are increasingly obliged to acknowledge, organizations comprise both formal and informal relationships, and it is often the latter which provide the real channels for knowledge and information flow. But how do we tap into these informal networks, and even if we can, how do we make sense of and derive value from what we find? Major shifts and trends (good and bad) often start as ‘weak signals’, almost undetectable by conventional means. How can we spot these early enough to be able to discourage bad trends and encourage good ones?

Cognitive Edge addresses these questions within an organization by collecting narrative and organizing and analyzing it for meaningful patterns using its open source methods supported by its proprietary software suite SenseMaker. It should be readily apparent that such early intelligence could prove vital to effective decision-making in many situations where the degree of risk is not clear.

Less readily apparent perhaps, is that knowledge organization has a key role to play in this scenario. As UCL alumnus Patrick Lambe says in his excellent book Organising Knowledge: Taxonomies, Knowledge and Organisational Effectiveness:

“Categorisation is, of course, fundamental to the management of risk. Different kinds of risk must be identified and grouped together based on origin, severity or remedy. Risk intelligence systems need to identify the signals or clues that would indicate particular categories of risk and put in place monitoring mechanisms (strategic early warning systems) so that these signals are picked up whenever a risk is emerging (Gilad, 2001).”

Moreover, it does not take a huge leap of the imagination to suggest that if software such as SenseMaker can discern patterns and trends even when weakly detectable, then it could presumably be employed in bridging the gap between formal vocabularies and newly emergent terms and concepts. Such tools are needed to help us move beyond the spurious divide between the formal taxonomic ‘elite’ and the folksonomic lumpenproletariat – a divide that advances the cause of neither party.
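Even without anything as sophisticated as SenseMaker, the bridging idea can be sketched crudely: watch the free tags people actually apply, and surface those that recur but are absent from the formal vocabulary as candidate concepts for the taxonomist to review. The vocabulary and tags below are invented examples, not data from any real system.

```python
from collections import Counter

# A toy 'weak signal' detector: free tags seen repeatedly but not yet
# covered by the controlled vocabulary become candidate new concepts.
# Both the vocabulary and the observed tags are invented examples.
controlled_vocabulary = {"taxonomy", "thesaurus", "metadata", "ontology"}

observed_tags = [
    "taxonomy", "folksonomy", "linkeddata", "metadata",
    "folksonomy", "linkeddata", "folksonomy", "tagging",
]

def emergent_terms(tags, vocabulary, min_count=2):
    """Free tags used at least min_count times but absent from the vocabulary."""
    counts = Counter(t for t in tags if t not in vocabulary)
    return [(term, n) for term, n in counts.most_common() if n >= min_count]

print(emergent_terms(observed_tags, controlled_vocabulary))
# prints [('folksonomy', 3), ('linkeddata', 2)]
```

The point is not the trivial counting but the workflow: the folksonomy feeds the formal vocabulary rather than competing with it, with a human taxonomist deciding which emergent terms earn a place.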

Interesting thought: If software like SenseMaker had been deployed at Lloyds, would they still have gone through with the HBOS takeover?