How to manage your company’s online reputation

Over the months I’ve occasionally said one or two negative things about a few companies.

Did I say occasionally? Sorry – I meant constantly.

Their range of responses has been interesting, and piqued the interest of Bullet PR’s Nicholas O’Flaherty, who used my BNZ series of rants, along with Mauricio’s Slingshot posts, as examples in a speech on how not to manage online reputation.

Nicholas asked me a few questions for an upcoming article, so I thought I’d cross-post the rather long answer to one of them here:

As a blogger do you have any advice for PR practitioners?

Encourage and facilitate clients and their staff to follow, respond to, and join the online conversation.

Someone needs to continuously scan the internet and put the blog posts that matter into the daily press clippings for the top management team to see. The traditional media no longer has a monopoly on sound opinion and commentary, and the unsound stuff can also trip you up. A basic first step is to set up a simple Google News/Blog Search alert on the company name, while a few people should be following the major blogs along with other media.
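For the technically inclined, that first step can be sketched in a few lines of Python. This is a hypothetical illustration, not a description of any particular tool: the feed-fetching itself is left out, and the sample posts are invented.

```python
# Filter items already pulled from a feed reader or search alert down to
# the ones that mention the company, ready for the daily clippings.

def find_mentions(posts, company):
    """Return posts whose title or summary mentions the company, case-insensitively."""
    needle = company.lower()
    return [
        post for post in posts
        if needle in post["title"].lower() or needle in post["summary"].lower()
    ]

# Invented sample items for the demo.
posts = [
    {"title": "BNZ outage continues", "summary": "Customers report login problems."},
    {"title": "Weekend reading", "summary": "Nothing bank-related here."},
]
hits = find_mentions(posts, "bnz")
# hits contains only the BNZ story.
```

The real work is in choosing which feeds and searches to poll, but the clipping step itself is this simple.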

Secondly, respond as quickly as possible to both positive and negative blog posts from reputable or popular blogs. Respond rationally, honestly and credibly. Ferrit (of all companies) responded to several of my negative posts about their troubled times, and the credible person responding was the head of marketing, Peter Wogan. He joined the conversation, and the result was better for everyone. An exemplary recent case was in Perth, where an employee of ISP iiNet responded to my complimentary post within an hour or two of the original going up – that’s proactive, and they’ll have a loyal customer now. Can we even imagine Telecom or Vodafone doing this?

Thirdly, join the conversation by blogging, but only do so if you can do it right. Doing it right means the voices are genuine and unrestricted. Genuine voices are those of senior and interesting staff. Unrestricted means that they do the writing themselves with no editing before release. Give guidelines, coach in the right tone for blogging, keep it simple and focused, and unleash the talent. The blogosphere’s BS detector is stunningly efficient – so tell the truth and tell it often and well. Xero and Google do this well.

Starting a blog is technically easy, but it does require work on the part of the writer along with continuous reinvention. I encourage individuals and companies to give it a go – experience is the best teacher.

Additionally, set client staff free – allow staff to blog about their experiences at work and about anything that isn’t illegal or unethical to disclose. Trade Me and Fairfax have shown huge tolerance for several bloggers, but we see very little from any of the other major companies, online or offline, in NZ. Let staff’s negative as well as positive comments come out – and think of it as a continuous survey of staff morale. We do care if staff are unhappy, don’t we?

Finally, have fun and be human. Blogging is an engaging medium, and while there are serious and light blogs, they all have a conversational and personal tone. But remember – the blogosphere is increasingly influential and read, and what goes up there stays up there forever.

Published by Lance Wiggs


12 replies on “How to manage your company’s online reputation”

  1. I’d like to add another point … if you don’t know what you’re talking about, don’t try to bluff your way through. My experience with PR has been pretty poor – where the person clearly doesn’t know or ‘get’ the online world but still insists on spouting nonsense about it. That damages everyone in the long run!


  2. Nicholas, thank you for your comment. Chong Newztel launched a New Zealand-focused blog monitoring service eight weeks ago to complement our existing traditional media monitoring services. Social media has a growing relevance in the New Zealand media mix. This tool allows clients and PR practitioners to “follow” online conversations and spot emerging debate much earlier.


  3. Putting my Nielsen Online hat on for a moment…

    We have a product/service, BuzzMetrics, which is in the UK/US at the moment and which we should be launching here sometime next year. It measures buzz across all of the CGM space – it doesn’t just find mentions of your product but actually analyses the content to see whether the buzz is positive or negative, and pulls further insights out of the content.

    As Jacqui said, BlogPulse, the free service from BuzzMetrics, is also available.


  4. Leon Hudson,

    You might be interested in the following links, which relate to document topic pattern recognition (text mining/classification). They would be useful for product development that makes your offering superior to your competitors’. The documents detail various state-of-the-art algorithms useful for document concept detection, as opposed to keyword detection. I specialize in data-mining, numerical computing, statistics and machine learning, where text-mining is a huge topic at the moment (you can tell by the number of new peer-reviewed papers submitted and accepted for publication in various computing journals).

    #1) Mining Text for Word Senses Using Independent Component Analysis (ICA)

    #2) Signal Detection Using ICA: Application to Chat Room Topic Spotting

    #3) Combining Topic Models and Social Networks for Chat Data Mining

    #4) Text Mining Non-Negative Matrix Factorization (NNMF)

    #5) Using Linear Algebra for Intelligent Information Retrieval

    #6) A Text Mining and Support Vector Machine (SVM) Approach

    There are more, but I think the ones above are currently the most popular. ICA originated in signal processing (speech processing, etc.) but its application has exploded into general areas of data analysis such as image recognition (medical applications) and product recommendation engines (the one Amazon is using – customers who bought item A also bought item B, etc.). The NNMF algorithm was originally developed for face recognition but has since found application in wider domains (speech processing, image processing, product recommendation engines, search engines, etc.).

    The SVD (singular value decomposition) algorithm described in ref #6 is available in the freely available Java package below (JAMA):

    JAMA : A Java Matrix Package

    I use JAMA for all my development. A recent project I was involved in was developed in Microsoft .NET, so I just saved the Java files as J# files (Microsoft’s Java), and it worked perfectly.

    You might also want to take a look at the popular open-source data-mining project from Waikato University, called WEKA. There are lots of pattern recognition algorithms in that project that you could use, including the SVM described in ref #6.

    There is no doubt that vendors who incorporate state-of-the-art pattern recognition algorithms into web analytics tools such as yours would win business. Google News uses text-mining for text summarization, but I don’t know which algorithm Google is using, since many text-mining algorithms are available in today’s literature, and they vary in accuracy and speed.

    I believe that if you develop text-mining capability into your system, it would be far superior to, say, BlogPulse or BuzzMetrics.

    If you want me to point out more info on text-mining, then I am happy to do that.
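    As a toy illustration of the gap between concept detection and plain keyword detection, here is a hypothetical Python sketch. It uses none of the algorithms listed above – just simple co-occurrence-based query expansion – and the corpus and documents are invented for the demo.

```python
# Expand a query with terms that frequently co-occur with it in a background
# corpus, so a document can match a "concept" even when it never contains
# the query word itself.
from collections import Counter

def cooccurring_terms(corpus, term, top=3):
    """Terms that most often share a document with `term` in the corpus."""
    counts = Counter()
    for doc in corpus:
        words = set(doc.lower().split())
        if term in words:
            counts.update(words - {term})
    return {w for w, _ in counts.most_common(top)}

def concept_match(docs, query, corpus):
    """Docs matching the query term OR its co-occurring 'concept' terms."""
    expanded = {query} | cooccurring_terms(corpus, query)
    return [d for d in docs if expanded & set(d.lower().split())]

# Invented background corpus and documents.
corpus = [
    "bank loan rates rise",
    "bank loan approved",
    "loan interest falls",
    "chocolate cake recipe",
]
docs = ["new loan products announced", "chocolate cake recipe"]
# A literal keyword match for "bank" finds nothing in `docs`;
# the expanded "concept" match also catches the loan story.
```

    The point is only that association lets the loan story match a “bank” query that literal keyword matching would miss; the ICA, NNMF and SVD methods above learn such associations statistically rather than by raw counting.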


  5. Falafulu Fisi,

    Thank you for your extensive comment. I have subscribed to the Weka data-mining mailing list out of curiosity. However, I do not believe that an algorithm can be written to put automated concept detection over and above the combined strengths of keyword detection and human ingenuity. Only a human reader can filter to the accuracy level that we require as a media monitoring agency.

    If you happen to be passing Onehunga, do look me up.


  6. “I do not believe that an algorithm can be written to put automated concept detection over and above the combined strengths of keyword detection and human ingenuity.”

    The invention and development of new technologies is not static, and recent advances in computational linguistics, statistics and mathematics have pushed the limit further. I have seen publications in text-mining where algorithms outperformed human classifiers in terms of lower misclassification errors. Goldman’s has a startup, Inform Technologies, that heavily uses text-mining.

    Most academic publishers that I know of today use text-mining to summarize authors’ submitted publications. Elsevier, for example, uses an automated document summarizer to handle the thousands of submitted papers it receives every day. It used to have a platoon of human reviewers read each and every submitted paper and decide which ones to accept for publication in its journals and which to reject – and it publishes a huge number of academic journals. That practice stopped in the early 2000s, since the cost of hiring reviewers exploded with the growing number of submissions and new journals, and it also took human reviewers longer to read the submissions. The accuracy of the automated system has improved dramatically since it was adopted.

    Authors are encouraged to submit their papers without an Abstract (similar to an exec summary). The text-mining summarizer scans the submitted paper, extracts the important concepts, and condenses them into a coherent short version that captures the main ideas of the whole document. These generated Abstracts are automatically sent back to the authors (the submitters) to see if they want to change (add or delete) anything, but about 98% were simply happy with the auto-generated version. Even authors who submitted papers with their own Abstracts already included turned out to prefer the auto-generated ones, as they found them much better. Abstracts are what peer reviewers scan to see whether the concepts in a publication are interesting and original; once they find an Abstract interesting, they read the whole paper, and reviewers usually reject certain publications on the basis of the Abstract alone.

    Text-mining is used when there are insufficient human resources, or it is too costly, to read and scan the huge amount of textual information available. If one can afford a platoon of human scanners to scour the vast textual information on the internet and summarize it, then that’s fine. However, most organizations can’t afford to do that – even a well-established publisher such as Elsevier knows it can’t afford to employ a large number of human reviewers. The CIA, NSA and government security agencies are already adopting text-mining. I have seen one publication about a huge retailer in the US that deploys a text-mining summarizer on its competitors’ websites to monitor their prices in real time; when prices are detected to have changed, management is alerted. The sales division used to go through the competitors’ websites on a regular basis, but found that prices changed so frequently that they couldn’t keep up with monitoring them.

    “If you happen to be passing Onehunga do look me up”

    Ok, next time.

    PS: I am developing something similar (although it is not a core function of my application, which is financial analytics) to scan financial-market-related news sites, summarize them, and automatically alert interested investors in real time (via a mobile device). This is not new in the area of finance, since Inform Technologies and others have jumped into it. Text-mining is also starting to appear in health science and medical applications.

    Just watch out for the next upcoming buzz term: text-mining for Web 3.0.
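    The extractive-summarization workflow described above can be sketched in miniature with nothing more than word frequencies: score each sentence by how often its (non-trivial) words appear in the whole document, and keep the top-scoring sentences as the “abstract”. Real summarizers are far more sophisticated; the sample sentences below are invented.

```python
# Frequency-based extractive summarization: the sentences whose words are
# most representative of the whole document become the summary.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def summarize(sentences, keep=1):
    """Return the `keep` highest-scoring sentences as an extractive summary."""
    words = [w.strip(".,").lower() for s in sentences for w in s.split()]
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence):
        tokens = [w.strip(".,").lower() for w in sentence.split()]
        return sum(freq[w] for w in tokens if w not in STOPWORDS)

    return sorted(sentences, key=score, reverse=True)[:keep]

# Invented sample document.
sentences = [
    "Text mining extracts concepts from text.",
    "The weather was nice.",
    "Mining text for concepts scales well.",
]
abstract = summarize(sentences, keep=1)
```

    The off-topic sentence scores lowest because its words occur nowhere else in the document, which is the whole trick.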


  7. Here is another tool, called Eigen-trend, for analysis of the blogosphere. This tool is based on the SVD (singular value decomposition) that I mentioned in my previous message. A few variants of SVD have been published and are available today in the literature; Eigen-trend uses the HOSVD (Higher-Order SVD) variant. The standard SVD is available in the freely available JAMA package, and the WEKA team has already bundled JAMA into the WEKA open-source software, so it now has SVD.

    The blogosphere – the totality of blog-related Web sites – has become a great source of trend analysis in areas such as product survey, customer relationship, and marketing. Existing approaches are based on simple counts, such as the number of entries or the number of links. In this paper, we introduce a novel concept, coined eigen-trend, to represent the temporal trend in a group of blogs with common interests and propose two new techniques for extracting eigen-trends in blogs. First, we propose a trend analysis technique based on the singular value decomposition. Extracted eigen-trends provide new insights into multiple trends on the same keyword. Second, we propose another trend analysis technique based on a higher-order singular value decomposition. This analyzes the blogosphere as a dynamic graph structure and extracts eigen-trends that reflect the structural changes of the blogosphere over time. Experimental studies based on synthetic data sets and a real blog data set show that our new techniques can reveal a lot of interesting trend information and insights in the blogosphere that are not obtainable from traditional count-based methods.

    The full paper (PDF or PS) is freely downloadable from the following link:

    Eigen-Trend: Trend Analysis in the Blogosphere Based on Singular Value Decompositions

    I have no doubt that the BuzzMetrics tool mentioned by Glen Barnes in his previous message uses SVD as its core engine. The description of BuzzMetrics’ capability looks like SVD to me, even though it doesn’t say what they’re using.
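    To make the eigen-trend idea concrete, here is a hypothetical Python sketch: stack per-blog daily mention counts for one keyword into a blogs × days matrix, then take the dominant right singular vector as the shared temporal trend. Power iteration stands in for a real SVD routine, and the counts are invented.

```python
# Extract the dominant shared trend ("eigen-trend") from a matrix of
# per-blog daily mention counts, via power iteration on M'M.

def eigen_trend(M, iters=200):
    """Dominant right singular vector of M: the shared trend over time."""
    m, n = len(M), len(M[0])
    t = [1.0] * n
    for _ in range(iters):
        u = [sum(M[i][j] * t[j] for j in range(n)) for i in range(m)]  # M.t
        t = [sum(M[i][j] * u[i] for i in range(m)) for j in range(n)]  # M'.u
        norm = sum(x * x for x in t) ** 0.5
        t = [x / norm for x in t]
    return t

# Three blogs, five days of invented mention counts for one keyword.
M = [
    [0, 1, 4, 9, 7],
    [1, 0, 5, 8, 6],
    [0, 2, 3, 10, 8],
]
trend = eigen_trend(M)
# The trend peaks at index 3, mirroring the spike shared by all three blogs.
```

    Unlike a simple sum of counts, the singular-vector view weights each blog by how strongly it participates in the common pattern, which is the insight the Eigen-trend paper builds on.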


Comments are closed.