Archive for Business models

Managing and Monetizing Paid, Owned, and Earned Content

How does your organization manage and value paid, owned, and earned content? Is there a strategy for each of the three types? A budget?

If you are struggling to measure the value of marketing-related content, you are certainly not alone – there is just no easy way to do it. In this session, Gerry Moran from SAP talks about the need for brands to manage and scale the three types of content together to engage customers throughout the sales cycle. Randy Woods from nonlinear creations describes a technique for modeling content and mapping it to online behaviors to get a better handle on content marketing costs and returns.

Join us Wednesday, December 3, 11:40 a.m. – 12:40 p.m., at the Gilbane Conference to learn more.

C9. Managing and Monetizing Paid, Owned, and Earned Content

Moderator:
Dom Nicastro, Staff Reporter, CMSWire.com

Speakers:
Gerry Moran, Head of Social Media, North America, SAP
Scaling and Monetizing Paid, Owned, and Earned Media in Your Organization
Randy Woods, President, nonlinear creations
Of Metrics and Models: Measuring the ROI of Content Marketing

See the complete conference schedule.

More on Microblogging: Evolution of the Enterprise Market

Following my post last week on the need for additional filters in enterprise microblogging tools and activity streams, I participated in an interesting Twitter conversation on the subject of microblogging and complexity. The spontaneous conversation began when Greg Lowe, a well-respected Enterprise 2.0 evangelist at Alcatel-Lucent, asked:

"Can stand alone micro-blogging solutions survive when platform plays introduce the feature?"

I immediately replied:

"Yes, if they innovate faster"

Greg shot back:

"is microblogging autonomy about innovation, or simple elegance? More features usually leads to lower usability?"

And, later, he asked a complementary question:

"is there a risk of Microblogging becoming "too complicated"?"

Is Greg on to something here? Do more features usually lead to lower usability? Will functional innovation be the downfall of stand-alone microblogging solutions, or will it help them stay ahead of platform vendors as they incorporate microblogging into their offerings?

One of the commonly heard complaints about software in general, and enterprise software in particular, is that it is too complicated. There are too many features and functions, and how to make use of them is not intuitive. On the other hand, usability is a hallmark of Web 2.0 software, and, if we make it too complex, it is likely that some people will abandon it in favor of simpler tools, whatever those may be.

But that dichotomy does not tell the entire story. Based on anecdotal evidence (there is no published quantitative research available), early adopters of Web 2.0 software in the enterprise appear to value simplicity in software they use. However, as a colleague, Thomas Vander Wal, pointed out to me yesterday, that may not be true for later, mainstream adopters. Ease-of-use may be desirable in microblogging (or any other) software, but having adequate features to enable effective, efficient usage is also necessary to achieve significant adoption. Later adopters need to see that a tool can help them in a significant way before they will begin to use it; marginal utility does not sway them, even if the tool is highly usable.

Simple may not be sustainable. As I wrote last week in this post, as enterprise use of microblogging and activity streams has increased and matured, so has the need for filters. Individuals, workgroups, and communities want to direct micro-messages to specific recipients, and they need to filter their activity streams to increase their ability to make sense out of the raging river of incoming information. Those needs will only increase as more workers microblog and more information sources are integrated into activity streams.
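To make the filtering need concrete, here is a minimal sketch in Python of directing micro-messages to specific recipients and filtering a stream by source. The message fields and function names are my own illustration, not any vendor's actual API:

```python
# Minimal sketch of activity-stream filtering; all field names are hypothetical.
from dataclasses import dataclass


@dataclass
class Message:
    author: str
    recipients: list  # explicit targets; an empty list means broadcast
    source: str       # e.g. "microblog", "crm", "build-server"
    text: str


def filter_stream(stream, for_user=None, sources=None):
    """Keep only messages addressed to `for_user` (or broadcast to everyone)
    and originating from one of `sources` (or any source if None)."""
    result = []
    for msg in stream:
        if for_user and msg.recipients and for_user not in msg.recipients:
            continue  # directed at someone else
        if sources and msg.source not in sources:
            continue  # filtered out by information source
        result.append(msg)
    return result


stream = [
    Message("alice", [], "microblog", "Shipping v2 today"),
    Message("bob", ["carol"], "microblog", "@carol can you review?"),
    Message("crm-bot", [], "crm", "New lead assigned"),
]

# Carol sees broadcasts plus messages directed at her, from any source.
visible = filter_stream(stream, for_user="carol")
```

As more sources feed the stream, adding filters like `sources=["microblog"]` is exactly the kind of functionality that raises the complexity question discussed above.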

In the public microblogging sphere, Twitter provides a solid example of the need to add functionality to a simple service as adoption grows in terms of registered users and use cases. As more individuals used Twitter, in ways that were never envisioned by its creators, the service responded by adding functionality such as search, re-tweeting, and lists. Each of these features added some degree of complexity to the service, but also improved its usability and value.

In the evolution of any software, there is a trade-off between simplicity and functionality that must be carefully managed. How does one do that? One way is to continuously solicit and accept user feedback. That allows the software provider and organizations deploying it to sense when they are nearing the point where functionality begins to overwhelm ease of use in a harmful manner. Another technique is to roll out new features in small doses at reasonable intervals. Some even advocate slipping new features in unannounced and letting users discover them for themselves. Hosted deployment of software (whether on-premise or off-site) makes this easier to do, since new features are automatically switched on for people using the software.

So back to the original question; can stand-alone microblogging solutions fend off the collaboration suite and platform vendors as they incorporate microblogging and activity streams in their offerings? My definitive answer is "yes", because there is still room for functionality to be added to microblogging before it becomes over-complicated.

Based on the historical evolution of other software types and categories, it is likely that the smaller vendors, who are intensely focused on microblogging, will be the innovators, rather than the platform players. As long as vendors of stand-alone microblogging offerings continue to innovate quickly without confusing their customers, they will thrive. That said, a platform vendor could drive microblogging feature innovation if they so desired; think about what IBM has done with its Sametime instant messaging platform. However, I see no evidence of that happening in the microblogging sphere at this time.

The most plausible scenario is that at some point, small, focused vendors driving microblogging innovation (e.g. Socialcast, Yammer) will be acquired by larger vendors, who will integrate the acquired features into their collaboration suite or platform. My sense is that we are still 2-3 years away from that happening, because there is still room for value-producing innovation in microblogging.

What do you think?

Observations from Gilbane Boston 2009

The 2009 version of the Gilbane Boston conference was held last week. It was the second one I have attended and my first as a track coordinator (I designed the Collaboration and Social Software track and made it happen.) The event was well attended (c. 1100 people) and the number of sponsors and exhibitors was up significantly from last year’s Boston conference. Many of the sessions I attended offered valuable insights from speakers and audience members. All in all, I would label the conference a success.

The Collaboration and Social Software track sessions were designed to minimize formal presentation time and encourage open discussion between panelists and audience members instead. Each session focused on either a common collaboration challenge (collaborative content authoring, content sharing, fostering discussions, managing innovation) or on a specific technology offering (Microsoft SharePoint 2010 and Google Wave.) The sessions that dealt with specific technologies produced more active discussion than those that probed general collaboration issues. I am not sure why that was the case, but the SharePoint and Wave sessions spawned the level of interactivity that I had hoped for in all the panels. The audience seemed more reluctant to join in during the other sessions. Perhaps it took them a while to warm up (the SharePoint and Wave sessions were at the end of the track.)

Here are some other, high level observations from the entire Gilbane Boston 2009 conference:

Twitter: Last year (and at Gilbane San Francisco in June 2009) attendees were buzzing about Twitter, wondering what it was and how it could be used in a corporate setting. This year the word “Twitter” was hardly uttered at all, by presenters or attendees. Most audience members seemed to be fixated on their laptop or smartphone during the conference sessions, but the related tweet stream flow was light compared to other events I’ve attended this quarter. The online participation level of folks interested in content management seems to mirror their carbon form patterns. Most are content to listen and watch, while only a few ask questions or make comments. That is true across all audiences, of course, but it seemed especially pronounced at Gilbane Boston.

SharePoint 2010: This topic replaced Twitter as the ubiquitous term at Gilbane Boston. If I had a dollar for every time I heard “SharePoint” at the conference, I would be able to buy a significant stake in Microsoft! Every company I consulted with during the event was seeking to make SharePoint either their primary content management and collaboration platform, or a more important element in their technology mix. Expectations for what will be possible with SharePoint 2010 are very high. If Microsoft can deliver on their vision, they will gain tremendous share in the market; if not, SharePoint may well have seen its zenith. Everything that I have heard and seen suggests the former will occur.

Google Wave: This fledgling technology also generated substantial buzz at Gilbane Boston. The session on Wave was very well attended, especially considering that it was the next-to-last breakout of the conference. An informal poll of the session audience indicated that nearly half have established a Wave account. However, when asked if they used Wave regularly, only about 20% of the registered users responded affirmatively. Actual participation in the Wave that I created for attendees to take notes and discuss the Collaboration track online underscored the poll results. Most session attendees said they see the potential to collaborate differently, and more effectively and efficiently, in Wave, but cited many obstacles that were preventing them from doing so at this time. Audience members agree that the Wave user experience has a long way to go; functionality is missing and the user interface and features that are there are not easy to use. Most attendees thought Wave’s current shortcomings would be improved or eliminated entirely as the product matures. However, many also noted that collaboration norms within their organization would have to change before Wave is heavily adopted.

Open Source: This was the hot topic of the conference. Everyone was discussing open source content management and collaboration software. An informal poll of the audience at the opening keynote panel suggested that about 40% were using open source content management software. Many of the other attendees wanted to learn more about open source alternatives to the proprietary software they have been using. Clients that I met with asked questions about feature availability, ease of use, cost benefits, and financial viability of providers of open source content management and collaboration software. It was clear that open source is now considered a viable, and perhaps desirable, option by most organizations purchasing enterprise software.

My big take-away from Gilbane Boston 2009 is that we are experiencing an inflection point in the markets for enterprise content management and collaboration software. Monolithic, rigid, proprietary solutions are falling out of favor and interest in more lightweight, flexible, social, open source offerings is rapidly growing. I expect that this trend will continue to manifest itself at Gilbane San Francisco in June 2010, and beyond.

The Impending Enterprise 2.0 Software Market Consolidation

Talk about a trip down memory lane… Another excellent blog post yesterday by my friend and fellow Babson College alum, Sameer Patel, snapped me back a few years and gave me that spine-tingling sense of déjà vu.

Sameer wrote about how the market for Enterprise 2.0 software may evolve much the same way the enterprise portal software market did nearly a decade ago. I remember the consolidation of the portal market very well, having actively shaped and tracked it daily as an analyst and consultant. I would be thrilled if the E2.0 software market followed a similar, but somewhat different, direction than the one the portal market took. Allow me to explain.

When the portal market consolidated in 2002-2003, some cash-starved vendors simply went out of business. However, many others were acquired for their technology, which was then integrated into other enterprise software offerings. Portal code became the UI layer of many enterprise software applications and was also used as a data and information aggregation and personalization method in those applications.

I believe that much of the functionality we see in Enterprise 2.0 software today will eventually be integrated into other enterprise applications. In fact, I would not be surprised to see that beginning to happen in 2010, as the effects of the recession continue to gnaw at the business climate, making it more difficult for many vendors of stand-alone E2.0 software tools and applications to survive, much less grow.

I hope that the difference between the historical integration of portal technology and the coming integration of E2.0 functionality is one of method. Portal functionality was embedded directly into the code of existing enterprise applications. Enterprise 2.0 functionality should be integrated into other applications as services. Service-based functionality offers the advantage of writing once and using many times. For example, creating service-based enterprise micro-messaging functionality (e.g. Yammer, Socialcast, Socialtext Signals, etc.) would allow it to be integrated into multiple, existing enterprise applications, rather than being confined to an Enterprise 2.0 software application or suite.
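The write-once, use-many-times idea can be sketched in a few lines of Python. The service class and its methods below are hypothetical, purely to illustrate one micro-messaging implementation being shared by several host applications instead of each embedding its own:

```python
# Hypothetical micro-messaging capability written once as a service
# and consumed by multiple enterprise applications.
class MicroMessagingService:
    """A single implementation of posting and reading short messages."""

    def __init__(self):
        self._messages = []

    def post(self, author, text, context=None):
        # `context` tags the message with the application it came from.
        msg = {"author": author, "text": text, "context": context}
        self._messages.append(msg)
        return msg

    def feed(self, context=None):
        # Each host application can read its own slice, or everything.
        if context is None:
            return list(self._messages)
        return [m for m in self._messages if m["context"] == context]


# The same service instance backs a CRM screen, an ERP screen, and a portal:
service = MicroMessagingService()
service.post("alice", "Closed the Acme deal", context="crm")
service.post("bob", "Inventory sync done", context="erp")

crm_feed = service.feed(context="crm")  # the CRM shows only its own messages
all_feed = service.feed()               # the portal aggregates everything
```

The point of the sketch is the shape, not the code: one service, many consuming applications, no micro-messaging logic duplicated inside each one.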

The primary goals of writing and deploying social software functionality as services are: 1) to allow enterprise software users to interact with one another without leaving the context in which they are already working, and 2) to preserve the organization’s investment in existing enterprise applications. The first is important from a user productivity and satisfaction standpoint, the second because of its financial benefit.

When the Enterprise 2.0 software market does consolidate, the remaining vendors will be there because they were able to create and sell:

  • a platform that could be extended by developers creating custom solutions for large organizations,
  • a suite that provided a robust, fixed set of functionality that met the common needs of many customers, or
  • a single piece or multiple types of service-based functionality that could be integrated into either other enterprise application vendors’ offerings or deploying organizations’ existing applications and new mashups

What do you think? Will history repeat itself or will the list of Enterprise 2.0 software vendors that survived the impending, inevitable market consolidation consist primarily of those that embraced the service-based functionality model?

Box.net Offers Proof of Its New Enterprise Strategy

Box.net announced today that it has integrated its cloud-based document storage and sharing solution with Salesforce.com. Current Box.net customers that want to integrate with Salesforce CRM can contact Box.net directly to activate the service. Salesforce.com customers may now download Box.net from the Salesforce.com AppExchange.

Box.net services will now be available in the Lead, Account, Contact, and Opportunity tabs of Salesforce CRM. In addition, the Box.net native interface and full range of services will be accessible via a dedicated tab on the Salesforce CRM interface. Users can upload new files to Box.net, edit existing files, digitally sign electronic documents, and e-mail or e-fax files. Large enterprise users will be given unlimited Box.net storage. The Box.net video embedded below briefly demonstrates the new Salesforce CRM integration.

While Box.net started as a consumer focused business, today’s announcement marks the first tangible manifestation of its emerging enterprise strategy. Box.net intends to be a cloud-based  document repository that can be accessed through a broad range of enterprise applications.

The content-as-a-service model envisioned by Box.net will gain traction in the coming months. I believe that a centralized content repository, located on-premise or in the cloud, is a key piece of any enterprise’s infrastructure. Moreover, content services — functionality that enables users to create, store, edit, and share content — should be accessible from any enterprise application, including composite applications such as portals or mashups created for specific roles (e.g. sales and/or marketing employees, channel partners, customers). Users should not be required to interact with content only through dedicated tools such as office productivity suites and Content Management Systems (CMS).
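As a rough illustration of what such content services look like from a consuming application's point of view, here is a minimal Python sketch. The interface below is entirely hypothetical — it is not Box.net's actual API — and uses an in-memory store to stand in for a central repository:

```python
# Hypothetical content-service interface that any enterprise application
# (CRM, portal, mashup) could call instead of embedding its own storage.
import uuid


class ContentService:
    def __init__(self):
        self._store = {}  # stands in for a central on-premise/cloud repository

    def create(self, name, body):
        doc_id = str(uuid.uuid4())
        self._store[doc_id] = {"name": name, "body": body, "shared_with": set()}
        return doc_id

    def edit(self, doc_id, body):
        self._store[doc_id]["body"] = body

    def share(self, doc_id, user):
        self._store[doc_id]["shared_with"].add(user)

    def fetch(self, doc_id):
        return self._store[doc_id]


# A CRM, a portal, and a mashup would all reuse the same repository:
svc = ContentService()
doc = svc.create("proposal.docx", "Draft v1")
svc.edit(doc, "Draft v2")
svc.share(doc, "sales-team")
```

The create/store/edit/share operations mirror the content services named above; the design point is that they are reachable from any application, not only from a dedicated CMS.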

Other content authoring and CMS software vendors are beginning to consider, understand, and (in some cases) embrace this deployment model. Box.net is one of the first proprietary software vendors to instantiate it. Adoption statistics of their new Salesforce CRM integration should eventually provide a good reading as to whether or not enterprise customers are also ready to embrace the content-as-a-service model.

Integration of Social Software and Content Management Systems: The Big Picture

Jive Software’s announcement last week of the Jive SharePoint Connector was met with a "so what" reaction by many people. They criticized Jive for not waiting to make the announcement until the SharePoint Connector is actually available later this quarter (even though pre-announcing product is now a fairly common practice in the industry.) Many also viewed this as a late effort by Jive to match existing SharePoint content connectivity found in competitors’ offerings, most notably those of NewsGator, Telligent, Tomoye, Atlassian, Socialtext, and Connectbeam.

Those critics missed the historical context of Jive’s announcement and, therefore, failed to understand its ramifications. Jive’s SharePoint integration announcement is very important because it:

  • underscores the dominance of SharePoint in the marketplace, in terms of deployments as a central content store, forcing all competitors to acknowledge that fact and play nice (provide integration)
  • reinforces the commonly-held opinion that SharePoint’s current social and collaboration tools are too difficult and expensive to deploy, causing organizations to layer third-party solutions on top of existing SharePoint deployments
  • is the first of several planned connections from Jive Social Business Software (SBS) to third-party content management systems, meaning that SBS users will eventually be able to find and interact with enterprise content without regard for where it is stored
  • signals Jive’s desire to become the de facto user interface for all knowledge workers in organizations using SBS

The last point is the most important. Jive’s ambition is bigger than just out-selling other social software vendors. The company intends to compete with other enterprise software vendors, particularly with platform players (e.g. IBM, Microsoft, Oracle, and SAP), to be the primary productivity system choice of large organizations. Jive wants to position SBS as the knowledge workers’ desktop, and their ability to integrate bi-directionally with third-party enterprise applications will be key to attaining that goal.

Jive’s corporate strategy was revealed in March, when they decreed a new category of enterprise software — Social Business Software. Last week’s announcement of an ECM connector strategy reaffirms that Jive will not be satisfied by merely increasing its Social Media or Enterprise 2.0 software market share. Instead, Jive will seek to dominate its own category that bleeds customers from other enterprise software market spaces.

Google Wave Protocols: Clearing the Confusion

Today is the long-awaited day when 100,000 lucky individuals receive access to an early, but working, version of Google Wave. I hope I am in those ranks! Like many people, I have been reading about Wave, but have not been able to experience it hands-on.

Wave has been a hot topic since it was first shown outside of Google last May. Yet it continues to be quite misunderstood, most likely because it is such an early stage effort and most interested people have not been able to lay hands on the technology. For that very reason, Gilbane Group is presenting a panel entitled Google Wave: Collaboration Revolution or Confusion? at the Gilbane Boston conference, on December 3rd.

The confusion surrounding Wave was highlighted for me yesterday in a Twitter exchange on the topic. It all started innocently enough, when Andy McAfee asked:

[Screenshot of Andy McAfee’s tweet]

To which I replied:

[Screenshot of my reply]

That statement elicited the following comment from Jevon MacDonald of the Dachis Group:

[Screenshot of Jevon MacDonald’s reply]

I am not a technologist. I seek to understand technology well enough that I can explain it in layman’s terms to business people, so they understand how technology can help them achieve their business goals. So I generally avoid getting into deep technical discussions. This time, however, I was pretty sure that I was on solid ground, so the conversation between me and Jevon continued:

[Screenshots of the rest of the exchange between me and Jevon]

Now, here we are, at the promised blog post. But, how can Jevon and I both be correct? Simple. Google Wave encompasses not one, but several protocols for communication between system components, as illustrated in the figure below.

Figure 1: Google Wave Protocols (Source: J. Aaron Farr)

The most discussed of these is the Google Wave Federation protocol, which is an extension of the Extensible Messaging and Presence Protocol (XMPP). However, Wave also requires protocols for client-server communication and for robot server (Web service) to Wave server communication. It is also possible, but probably not desirable, for Wave to utilize a client-client protocol.

Jevon was absolutely correct about the XMPP protocol enabling server-server communication in the Google Wave Federation Protocol. The Draft Protocol Specification for the Google Wave Federation Protocol lays out the technical details, which I will not explore here. XMPP provides a reliable mechanism for server-server communication and is a logical choice for that function in Google Wave, because XMPP was originally designed to transmit instant message and presence data.

It turns out that the Google Wave team has not defined a specific protocol to be used in client-server communication. A Google whitepaper entitled Google Wave Data Model and Client-Server Protocol does not mention a specific protocol. The absence of a required or recommended protocol is also confirmed by this blog post. While the Google implementation of Wave does employ HTTP as the client-server protocol, as Jevon stated, it is possible to use XMPP as the basis for client-server communication, as I maintained. ProcessOne demonstrates this use of XMPP in this blog post and demo.

Finally, there is no technical reason that XMPP could not be used to route communications directly from one client to another. However, it would not be desirable to communicate between more than two clients via XMPP. Without a server somewhere in the implementation, Wave would be unable to coordinate message state between multiple clients. In plain English, the Wave clients most likely would not be synchronized, so each would display a different point in the conversation encapsulated in the Wave.
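The synchronization point can be shown with a toy model: a central server assigns one global order to incoming operations, so every client that replays the server's log converges on the same conversation state. This is a deliberate simplification of what Wave's server actually does (which involves operational transformation), but it illustrates why removing the server breaks synchronization:

```python
# Toy model: a central server totally orders operations so clients converge.
# This is an illustration of the coordination role, not Wave's real algorithm.
class WaveServer:
    def __init__(self):
        self.log = []           # the single authoritative operation order

    def submit(self, op):
        self.log.append(op)     # the server decides the global sequence
        return list(self.log)   # clients replay the same log


class Client:
    def __init__(self, server):
        self.server = server
        self.state = []

    def send(self, op):
        self.state = self.server.submit(op)

    def sync(self):
        self.state = list(self.server.log)


server = WaveServer()
a, b = Client(server), Client(server)
a.send("alice: hello")
b.send("bob: hi there")
a.sync()
b.sync()
# Both clients now hold the identical ordered conversation.
```

Take the server away and each client would apply operations in whatever order they arrived, displaying different points in the conversation — the desynchronization described above.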

To summarize, Google Wave employs the following protocols:

  • XMPP for server-server communication
  • HTTP for client-server communication in the current Google implementation; XMPP is possible, as demonstrated by ProcessOne
  • HTTP (JSON RPC) for robot server-Wave server communication in the current Google implementation
  • Client-client protocol is not defined, as this mode of communication is most likely not usable in a Wave
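For the robot leg, the wire format is JSON RPC carried over HTTP. As a rough sketch of what such a request body looks like — the method name and parameters below are invented for illustration, not the actual Wave robot API — a robot server might POST something of this shape to a Wave server:

```python
import json

# Illustrative JSON-RPC request; the method and params are hypothetical,
# not the real Google Wave robot protocol.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "wavelet.appendBlip",  # hypothetical operation name
    "params": {
        "waveId": "example.com!w+abc",
        "content": "Hello from a robot",
    },
}
payload = json.dumps(request)  # serialized body of the HTTP POST

# The receiving server decodes the body and dispatches on the method name.
decoded = json.loads(payload)
```

The essential point is that the robot protocol is ordinary structured requests over HTTP, in contrast to the persistent XMPP connections used for server-server federation.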

I hope this post clarifies the protocols used in the current architecture of Google Wave for you. More importantly, I hope that it highlights just how much additional architectural definition needs to take place before Wave is ready for use by the masses. If I had a second chance to address Andy McAfee’s question, I would unequivocally state that Google Wave is a “concept car” at this point in time.

Postscript: The aforementioned possibilities around XMPP as a client-client protocol are truly revolutionary.
The use of XMPP as the primary communication protocol for the Internet, instead of the currently used HTTP protocol, would create a next generation Internet in which centralized servers would no longer serve as intermediaries between users. Web application architectures, even business models, would be changed. See this post for a more detailed explanation of this vision, which requires each user to run a personal server on their computing device.

Digital Publishing Visionary Profile: Cengage’s Ken Brooks

 

Ken Brooks is senior vice president, global production and manufacturing services at Cengage Learning (formerly Thomson Learning) where his responsibilities include the development, production, and manufacturing of textbooks and reference content in print and digital formats across the Academic and Professional Group, Gale, and International divisions of Cengage Learning. Prior to his position at Cengage Learning, Ken was president and founder of Publishing Dimensions, a digital content services company focused on the eBook and digital strategy space. Over the course of his career, Ken founded a Philippines-based text conversion company; a public domain publishing imprint; and a distribution-center based print-on-demand operation and has worked in the trade, professional, higher education, and K-12 publishing sectors. He has held several senior management positions in publishing, including vice president of digital content at Barnes & Noble, vice president of operations, production, and strategic planning at Bantam Doubleday Dell, and vice president of customer operations at Simon & Schuster. Prior to his entry into publishing, Ken was a senior manager in Andersen Consulting’s logistics strategy practice.

 

This interview is part of our larger study on digital publishing.

 
