Curated for content, computing, and digital experience professionals

Category: Collaboration and workplace

This category is focused on enterprise / workplace collaboration tools and strategies, including office suites, intranets, knowledge management, and enterprise adoption of social networking tools and approaches.

“It’s Not Not About the Technology”

Thank you, Andrew.

Andrew McAfee has a thoughtful post (“It’s Not Not About the Technology”) on a topic I’ve often bitten my tongue about, i.e., the (often smugly delivered) phrase “It’s not about the technology”. And of course the context is a discussion about applying technology to a business application, which should, by definition, imply that both technology capabilities and business requirements need to be part of the “about”.

It is common for one or the other to be overemphasized, to ill effect. Perhaps because of my technical background, I am more sensitive to the use of this phrase in situations where the utterer is covering up a lack of knowledge, or a fear of technology or change.

You simply can’t make good business decisions that involve technology without understanding what the technology can and can’t effectively do – business requirements need to be expanded or contracted based on what is possible and feasible if you want your IT investments to be successful and competitive.

Often the largest benefit of a piece of software is a little-known (even to the vendor) feature that happens to allow for, e.g., a process improvement that would have been a requirement had you known it was possible. See what Andrew has to say.

What’s Web 2.0 All About? Let’s Start with the Infrastructure

Jacques Bughin and James Manyika at McKinsey have just published another thought-provoking report, “How Businesses Are Using Web 2.0.” They’re working with a beefy survey (2,847 executives worldwide, 44% of whom hold C-level positions), supplemented by an online discussion. Their conclusions:

“Expressing satisfaction with their Internet investments so far, [respondents] say that Web 2.0 technologies are strategic and that they plan to increase these investments. But companies aren’t necessarily relying on the best-known Web 2.0 trends, such as blogs; instead they place the greatest importance on technologies that enable automation and networking.” Companies that are using Web 2.0 technologies “have developed a new way of bringing technology into business . . . This new approach is easier to implement and more flexible than traditional top down approaches.”

Wow, so at the end of the day, our focus on Collaboration 2.0 is all about developing new, more flexible models for deploying business applications! Could it be that this year’s two-dot-oh rave is all about lightweight development models?

Well, perhaps — this is certainly the underlying logic of Google’s $625 million acquisition of Postini, an email security and management company, consummated today. As today’s NYTimes observed, “The deal underscores Google’s ambitions to become a serious player in the business of selling software to companies and organizations, in competition with Microsoft and others.” But as much as I am a fan of Gmail, I don’t think it is going to be the Exchange killer. There’s still going to be a big market for enterprise applications, enterprise email included. (I’m not sure how I would feel about my doctors using Gmail in the hospital when discussing my health and well-being.)

Here’s the issue. We can rely on the Internet and the Web to provide an ever-increasing set of business services. Some are advertising-supported (such as Gmail); others are subscription-based (such as Salesforce, Foldera, and Longjump). The savvy Web 2.0 vendors — including Google, Amazon, Facebook, Yahoo, and various startups — are rapidly exposing their APIs for meaningful mashups, each promising to anchor a vibrant ecosystem for content sharing and collaboration. We’re going to have a lot more flexibility. Yet as business leaders and technology visionaries, we are going to have to plan very carefully how we will use the Internet’s power of instant connectivity for our strategic business advantage.

We often forget the amount of infrastructure we have readily available when we go online and exchange email. DNS, TCP/IP, some basic security, SMTP, HTTP — it all works now; earlier generations of technologies limited the scope of these services to enterprise islands. (Remember DECnet, PROFS, and WangNet?) Now we’re concerned about privacy, security, findability, records retention, manageability, and a whole host of knotty systems infrastructure issues.

The challenge facing the collaboration ecosystem vendors for the enterprise (whoever they may be) is to build the infrastructure required to support the “automation and networking” environments that the McKinsey execs have in mind. The jury is still to be convened.

I bet that we’re going to be hearing a lot from the mainstream enterprise vendors – Microsoft, Oracle, IBM, SAP, EMC, Cisco – by the end of the year. With our continuing focus on enterprise collaboration and social computing, we are going to have to pay even more attention to the infrastructure services available over corporate intranets, partner extranets, and the public Internet. It remains to be seen how flexible these loosely coupled core services can be – and how they can be packaged into vibrant business propositions.

Free or Ambient?

It has been a week since the O’Reilly Tools of Change conference adjourned. The topics and presentations were provocative and there are sure to be some lively debates continuing on for months to come…

Several of the keynotes centered on the theme that “information wants to be free”.

Chris Anderson, Editor in Chief of Wired and author of the bestseller “The Long Tail”, was one of the early keynoters. He started the debate by announcing that his next book would be titled “Free” and would focus on making a case for providing content to consumers at no charge.

Towards the end of the first day, Jimmy Wales, President of the Wikimedia Foundation, spoke about his new company, Wikia. His goal is for Wikia to do for the rest of the library what Wikipedia did for the encyclopedia section, and to make the assembled knowledge of the world available to the masses for free. And by the way, he’s going to produce a free Google competitor at the same time. (He certainly doesn’t lack for ambition.)

Erin McKean of the Oxford University Press closed the conference with a vivid discussion of “book-shaped objects” and openly questioned whether books were the best information package for the future. As a lexicographer, she weighed in on the “free” debate by saying that it would be better to state that all “information wants to be ambient”. She indicated that in this sense ambient means readily available for use. (Because I had always associated the word ambient with sound, lighting, or atmosphere, I checked several other dictionaries to clarify this sense of ambient. It did not appear. When I went to the OED online, I found that their definition of ambient was neither free nor ambient, but available for a mere $29.95/month. Because Erin has yet to post her presentation on the O’Reilly site, you’ll have to trust my memory for this definition.) Her point is well taken; the word “free” is simply too vague. The American Heritage Dictionary (AHD) lists 17 different definitions or senses, ranging from “matters of liberty” to “lack of restraint” to “lack of encumbrance” to “provided without consideration or reward”.

Mr. Anderson’s talk seemed to stress the economic meaning of “free”. Thus, we must assume that he means that information or content should be available without consideration or reward. The popular justification for this approach to content valuation is that because the cost of digital distribution is negligible, it is unfair to charge for content or information that is essentially free to distribute. If the distribution-cost method were used to price traditional book content, the cost of printing, paper, and binding would be the determining factor. Of course, that’s not the case. That approach omits many of the costs of publishing content, including reviewing, editing, formatting, proofreading, publicizing, selling, and marketing. And, of course, most authors like to be paid royalties for their work. It also neglects to consider that many readers prefer traditional book formats, and therefore many content elements will eventually be published in several media formats.

As Mr. Anderson is the first to admit, his book entitled “Free” won’t actually be free. Okay, the audio book or e-book may be free to those who purchase the printed book. But if you want to purchase only the e-book or the audio book, you’ll be expected to pay for it. Mr. Anderson is also exploring various forms of online or printed books that would either be sponsored by corporations or supported by (gasp) advertising. While some of these offerings might be free to the reader, it seems that there will be considerations and rewards built somewhere into this project.

He went on to say that he might well be in favor of making the book free because of the publicity benefits to his consulting practice, the speaking fees, and the promotional value to his “personal brand”. However, he felt that his publisher would likely object because they are in the business of generating sales and revenue from selling books. (After the session, I heard several people suggest that Mr. Anderson could self-publish his new work, and then he would be free to make it free.)

Based upon his work at Wikipedia, Mr. Wales could be seen as a developer of truly free content. The governing organization is a charitable foundation, and all of the authors are volunteers. The costs of supporting the staff and technology infrastructure are paid by donations. However, during his keynote, Mr. Wales was quick to point out two major differences between Wikipedia and Wikia: the scope of Wikia is much broader, and it has been organized as a for-profit entity.

Wikipedia represents the “perfect storm” for a collaborative work or “peering”. According to Wikinomics by Tapscott and Williams, there are three factors necessary to make peering effective: 1) the object of production is information or culture, which keeps the cost of participation low for contributors; 2) tasks can be chunked out into bite-sized pieces that individuals can contribute in small increments and independently of other producers, which makes their overall investment of time and energy minimal in relation to the benefits they receive in return; and 3) the costs of integrating those pieces into a finished end product, including the leadership and quality control mechanisms, must be low.

While these criteria would apply to a number of other works found in a library, there are many others (novels, monographs, complex texts, dissertations, etc.) that don’t meet these criteria well at all and aren’t likely to be challenged by collaborative works. The costs associated with “wikiing” the rest of the library would likely be enormous. Mr. Wales seems to be banking on advertising dollars to support this effort. If this were the model, these works might be considered free from a consumer pricing perspective, but the advertising revenues and potential profits would have to be deemed consideration in economic terms.

One also wonders whether the volunteer authorship model is extensible. I can see how it would work when the author is writing about a topic that is intensely interesting (as I am doing at this moment). However, a great deal of content is more the result of craftsmanship than inspiration. It seems unlikely that volunteers could be recruited to write “drudge works”.

And as appealing as it may be to write essays or modules on interesting topics for free, I for one would enjoy it even more if I received some consideration for my hard work. (No wisecracks, please.) As popular as Wikipedia is today, someday it too may face a challenge from some organization (e.g., Google or Microsoft) that might add new features or devise a revenue-sharing model that provides authors with incentives.

I guess my point is that an author’s ideas and created content are products. And like other products, they should have a value proposition and a go-to-market strategy. The valuation/pricing options might include purchase, subscription, sponsorship, syndication, promotion, and free. Depending on the utility, creativity, entertainment value, uniqueness, etc., of the content, one or more of the above models might be appropriate. While “free” is one option, why should it be the preferred choice? Talented people like Messrs. Anderson and Wales create content that is very good, and they are entitled to receive proper consideration if they so desire.

Jeff Patterson, CEO of Safari Books Online, had some fascinating data on this topic. He studied how Safari’s customers, who consume largely technical information, value its information products versus other content, some of which is “free”. He found that some quality content was indeed free. However, most “free” content was either advertising-supported or was offered in exchange for specific information about the content consumer or their work projects. He asked customers about the tradeoffs they were willing to make to obtain content for free. If I recall correctly, about a quarter of their customers would rather pay for content than put up with advertising that was inappropriate and/or distracting; a third would rather pay for content than reveal any personal information other than name and e-mail address; and two-thirds would rather pay for content than reveal in-depth information about a project they were working on. (Patterson’s presentation was one of the best — I hope that he posts his slides.)

Advocating for universally free content might indeed have the unwanted effect of reducing the amount of excellent content created by professional authors who depend on their writing for their livelihood. I think that Bruce Chizen, Adobe’s CEO, summed it up nicely when asked by Tim O’Reilly where he stood in this debate. He said that Adobe makes many large investments of human and financial capital in the inventions and products that they produce. While he is all in favor of providing some of their intellectual property to consumers, standards organizations, and society for free, he reserves the right to decide what should be free. I’m sure that his stakeholders support that position.

I’ll conclude this entry with some thoughts on the provocative concept of ambient information. For many years, content was created for a single purpose and was closely regulated against peripheral usages. With the advent of the digital era came significant opportunities for deriving additional value from content. I will forever be grateful to the senior management of Houghton Mifflin Company, who saw the wisdom of freeing the content of their dictionary from exclusively “booklike objects” and allowed linguists and software engineers to build spelling correction technology. In fact, most of the English-language spelling technology in use today was derived from their American Heritage Dictionary database. And as I described in an earlier blog entry, Thomson’s retiring CEO Richard Harrington made information ambience central to their core strategy, and judging from their financial statements, they are receiving significant consideration for their efforts!

Making content accessible to inform, educate, and entertain people is a worthy goal. Making content more ambient will offer content creators and publishers many new opportunities to publicize their work, create goodwill, answer new questions, solve difficult problems, and generate new income streams that are appropriate and commensurate with the value of the content.

What’s the Future for User Generated Content?

I was part of a Web 2.0 panel in New York City earlier this week, moderated by Bryant Shea, Director of Content Management at Molecular. It was an intimate affair – we had about thirty people in the room, drawn largely from media/entertainment, financial services, and insurance firms.

We wanted to encourage audience participation — about five minutes into the event, one person piped up, “We know that Web 2.0 is all about user generated content. We certainly agree – we’ve put up a blog for our customers. Now what do we do with all this content?” Good question – we spent the rest of the evening trying to answer it.

I’m not sure we ever reached a resolution, but this got me thinking. What’s equally important is the prior question – why put up a blog in the first place? Business strategies need to drive technology choices; technology options can then drive business opportunities.

Now there are certainly plenty of plausible reasons for companies to want to encourage blogging about their products and services. Building brand loyalty, supporting the fans, wanting to learn about customers’ experiences (both the good and the bad), facilitating a peer group who can support one another, perhaps even turning to loyal end users to help with product development – the list goes on.

Companies have many options for engaging their customers. But they first have to be open to having the conversation with them, and have some inkling of how they’ll use all the insights they acquire.

Note, an inkling is a clue or a hunch – it’s not (yet) a plan. Allowing customers to blog back, blog about, and blog with one another is only the first step in a larger process. With Web 2.0, there’s lots of room for experimentation – trying things out, seeing what works, and moving on. What’s new is the ability to link things together.

Implementers and the business managers who support them need not have a formal plan about how they’re going to use all the user generated content. What they do need, I believe, is the willingness and the time to listen, and then to figure out how best to join the conversation.

Where is the “L” in Web 2.0?

I was only able to make it to the Enterprise 2.0 conference in Boston yesterday. You can still get a demo pass for today. But I was thrilled to hear analysts, researchers, case study presenters, and yes, even vendors, drill down into one of my favorite phrases, “people, processes, and technology make it possible”, and I hope the mantra continues today.

The point being, obviously, that 2.0 is not just about technology ;-). It’s about culture, bridging generation gaps, the evolution of *people* networking, and redefining community from the core of where community starts: humans.

What I didn’t hear, however, was the “L” word — specifically language, and that bothered me. We just can’t be naive enough to think that community, collaboration, and networking on a global scale are solely English-driven. We need to get the “L” word into the conversation.

My globalization practice colleague Kaija Poysti weighs in here.

More data on Facebook users and Enterprise 2.0

Here is a chart combining the data from the poll described yesterday, given to 500 25-34-year-old Facebook users, with the results of the same poll given to 500 18-24-year-old Facebook users. There is certainly a difference. But the most surprising results are the extremely low expectations about the use of blogs and wikis, and even social networking software. These findings, informal as they are, would make me very nervous if I were a start-up hoping to make it by capturing the Facebook generation as they stream into the workforce.

Facebook Generation on Enterprise 2.0 Collaboration Technologies

I joined Facebook a few days ago to check it out and to get an idea of the approach’s relevance to enterprise applications. I need to use it some more before I reach any conclusions, but since I am at the Enterprise 2.0 Collaborative Technologies conference this week, I decided to use the new Facebook poll feature to see what the Facebook crowd thinks about collaboration as they enter the workplace. The poll feature is limited (one multiple-choice question), but it provides direct access to the tens of millions of Facebook users, and you can choose from a couple of demographic options. Also, you can get the results very quickly – in my first poll I received 500 responses in about 9 hours!

I will blog more about the results later and will also include all the graphs, but in the meantime, everyone I mentioned the poll to at the conference has wanted the results, so here are the basics:

Question: Which collaboration technologies will you use the most in your job in two years?

  • SMS text messaging 6% (30)
  • email will continue to dominate 66% (328)
  • instant messaging 16% (53)
  • facebook-like social networking tools for business 11% (53)
  • blogs and/or wikis 2% (8)

Keep in mind that the 500 responses all came from the 25-34 age group, who are presumably mostly in the workforce. I just started the same poll with the 18-24 age group and will provide those results for comparison later tonight. UPDATE: The combined poll results are now available.

Between the two age groups, we will have some information direct from the generation that Don Tapscott, Andrew McAfee, and others are making predictions about (we refer to some of this here). This is of course a very informal poll, but interesting nonetheless. I wish I had had the results in time to provide them to Andrew and Tom Davenport for yesterday’s debate!
