Gilbane Conferences & Advisor

Curated content for content, computing, and digital experience professionals

Category: Enterprise 2.0

Video: Larry Hawes Previews the Colleagues and Collaboration Track

I had a chat with Larry this week on what people can expect from the enterprise collaboration track at Gilbane Boston this year. (Today is the last day of early registration discounts for hotel & conference passes.)

I have lots of great interviews recorded and scheduled, so definitely keep an eye on our feed for a variety of very smart people from a diverse set of specialties and backgrounds.

LinkedIn Signal Demonstrates The Power of Role-Based Activity Stream Filters

LinkedIn today announced Signal, a new feature (currently in beta) that lets members see an activity stream that combines LinkedIn status updates and Twitter posts from other members who have opted-in to the feature. LinkedIn has licensed the Twitter firehose to incorporate all of its members’ tweets into the site, not just tweets with the #in hashtag embedded, as is current practice.

While it is hard to imagine anyone other than corporate and independent talent recruiters making LinkedIn their primary Twitter client, Signal does have an element that is worthy of emulation by other social networks and enterprise social software providers that incorporate an activity stream (and which of them does not these days?). That feature is role-specific filters.

I wrote previously in this post about the importance of providing filters with which individuals can narrow their activity stream. I also noted that the key is to understand which filters are needed by which roles in an organization. LinkedIn apparently gets this, judging by the screenshot pictured below.

LinkedIn Signal screenshot courtesy of TechCrunch

Notice the left-hand column, labeled “Filter by”. LinkedIn has most likely researched a sample of its members to determine which filters would be most useful to them. Given that recruiters are the most frequent users of LinkedIn, the set of filters displayed in the screenshot makes sense. They allow recruiters to see tweets and LinkedIn status updates pertaining to LinkedIn members in specific industries, companies, and geographic regions. Additionally, the Signal stream can be filtered by strength of connection in the LinkedIn network and by post date.

The activity stream of every enterprise social software suite (ESS) should offer such role-based filters, instead of the generic ones they currently employ. Typical ESS filtering parameters include individuals, groups or communities, and workspaces. Some vendors offer the ability to filter by status as a collaborator on an object, such as a specific document or sales opportunity. A few ESS providers allow individuals to create custom filters for their activity stream. While all of these filters are helpful, they do not go far enough in helping individuals narrow the activity stream to view updates needed in a specific work context.

The next logical step will be to create standard sets of role-based filters that can be further customized by the individuals using them. Just as LinkedIn has created a filter set that is useful to recruiters, ESS providers and deploying organizations must work together to create valuable filter sets for employees performing specific jobs and tasks. Doing so will result in increased productivity from, and effectiveness of, any organization’s greatest asset – its people.
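
To make the idea concrete, here is a minimal sketch, in Python, of what a role-based filter set and its application to an activity stream might look like. Every name in it (StreamItem, RECRUITER_FILTERS, matches) is an illustration invented for this post, not any vendor's actual API.

```python
from dataclasses import dataclass

# Hypothetical activity stream item; the field names are illustrative only.
@dataclass
class StreamItem:
    author: str
    text: str
    industry: str
    region: str
    connection_degree: int  # 1 = direct connection, 2 = second degree, ...

# A role-based filter set: a standard starting point for recruiters that an
# individual could further customize, as suggested above.
RECRUITER_FILTERS = {
    "industries": {"Software", "Financial Services"},
    "regions": {"Boston", "New York"},
    "max_connection_degree": 2,
}

def matches(item: StreamItem, filters: dict) -> bool:
    """Return True if a stream item passes every active filter."""
    return (item.industry in filters["industries"]
            and item.region in filters["regions"]
            and item.connection_degree <= filters["max_connection_degree"])

stream = [
    StreamItem("alice", "We're hiring!", "Software", "Boston", 1),
    StreamItem("bob", "Great quarter.", "Retail", "Chicago", 3),
]
print([i.author for i in stream if matches(i, RECRUITER_FILTERS)])  # ['alice']
```

The point of the sketch is the shape of the data, not the specific fields: a different role (say, a salesperson) would swap a different filter dictionary over the same stream.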

Should you fly without a pilot?

Last week Andrew McAfee wrote a blog post entitled Drop the Pilot, wherein he discusses the challenges associated with piloting Enterprise 2.0 tools and arrives at the conclusion that we should abandon pilots altogether for such implementations and go as broad as possible right away. As much as I hate to, I respectfully disagree.

Call me a cynic, but when I hear suggestions that go against my gut and break some very fundamental principles, such as the need to proactively manage change as well as risk, I tend to stand back and watch others jump off the bridge to see what happens before I even think about stepping to the edge. As technologists, we are innovating at a rapid pace and paradigms are constantly shifting around us, but we need to be cautious about abandoning proven change and risk management practices just because the tools are new.

I do agree that E2.0 projects pose unique challenges, one of which is that their effectiveness is often [but not always] tied proportionally to the number of users in the ‘system’ (e.g. with microblogs… try launching one with only 100 diverse people in your test group and see how well it takes off. Hint: it won’t). I also agree that it’s been universally accepted that “pilot” = “small”, and that this characterization, by definition, hinders the chances of success for an E2.0 pilot. But the ‘aha’ here should not be that we should start throwing caution to the wind and launching new tools across our organizations.

Filtering Microblogging and Activity Streams

The use of microblogging and activity streams is maturing in the enterprise. This was demonstrated by recent announcements of enhancements to those components in two well-regarded enterprise social software suites.

On February 18th, NewsGator announced a point release to its flagship Enterprise 2.0 offering, Social Sites 3.1. According to NewsGator, this release introduces the ability for individuals using Social Sites to direct specific microblogging posts and status updates to individuals, groups, and communities. Previously, all such messages were distributed to all followers of the individual poster and to the general activity stream of the organization. Social Sites 3.1 also introduced the ability for individuals to filter their activity streams using “standard and custom filters”.

Yesterday (March 3rd), Socialtext announced a major new version of its enterprise social software suite, Socialtext 4.0. Both the microblogging component of Socialtext’s suite and its stand-alone microblogging appliance now allow individuals to broadcast short messages to one or more groups (as well as to the entire organization and self-selected followers). Socialtext 4.0 also lets individuals filter their incoming activity stream to see posts from groups to which they belong (in addition to filtering the flow with the people and event filters that were present in earlier versions of the offering).
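
To illustrate the outbound half of this, here is a minimal sketch, in Python, of how targeting a micro-message to followers, a group, or the whole organization might be modeled. The names and structures are my own invention, not NewsGator's or Socialtext's actual APIs.

```python
from dataclasses import dataclass

# Hypothetical audience types for an outbound micro-message.
ORG, FOLLOWERS, GROUP = "organization", "followers", "group"

@dataclass
class MicroMessage:
    author: str
    text: str
    audience_type: str
    group_name: str | None = None  # only used when audience_type == GROUP

def deliver(msg: MicroMessage, directory: dict) -> set[str]:
    """Resolve a message's declared audience to a set of recipients."""
    if msg.audience_type == ORG:
        return directory["everyone"]
    if msg.audience_type == FOLLOWERS:
        return directory["followers"][msg.author]
    if msg.audience_type == GROUP:
        return directory["groups"][msg.group_name]
    raise ValueError(f"unknown audience type: {msg.audience_type}")

directory = {
    "everyone": {"ann", "ben", "cal", "dee"},
    "followers": {"ann": {"ben", "cal"}},
    "groups": {"sales": {"cal", "dee"}},
}
msg = MicroMessage("ann", "Q3 numbers are in.", GROUP, "sales")
print(deliver(msg, directory))  # {'cal', 'dee'}
```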

The incorporation of these filters for outbound and incoming micro-messages is an important addition to the offerings of NewsGator and Socialtext, but it is long overdue. Socialcast has offered similar functionality for nearly two years, and Yammer has included these capabilities for some time as well (and extended them to community members outside of an organization’s firewall, as announced on February 25th). Of course, both Socialcast and Yammer will need to rapidly add additional filters and features to stay one step ahead of NewsGator and Socialtext, but that represents normal market dynamics and is not the real issue. The important question is this:

What other filters do individuals within organizations need to better direct microblogging posts and status updates to others, and to mine their activity streams?

I can easily imagine use cases for location, time/date, and job title/role filters. What other filters would be useful to you in either targeting the dissemination of a micro-message or winnowing a rushing activity stream?

One other important question arises as the number of potential micro-messaging filters increases: what should the default setting be for views of outgoing and incoming messages? Should short bits of information be sent to everyone, and activity streams show all organizational activity by default, so as to increase ambient awareness? Or should a job title/role filter be the default, in order to maximize the focus and productivity of individuals?

There is no single answer other than “it depends”, because each organization is different. What matters is that the decision is made deliberately (not overlooked) with specific corporate objectives in mind, and that individuals are given the means to easily and intuitively change the default target of their social communications and the pre-set lens through which they view those of others.
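
For what it's worth, here is one way the default-versus-override decision might be modeled, as a minimal Python sketch under invented names rather than any product's configuration format:

```python
# An organization-wide default lens for the activity stream; "role" here
# stands for the job title/role filter floated above as a possible default.
ORG_DEFAULT_VIEW = {"scope": "role"}

def effective_view(org_default: dict, user_override: dict | None) -> dict:
    """An individual's own settings win; otherwise use the org default."""
    view = dict(org_default)
    if user_override:
        view.update(user_override)
    return view

# A user who values ambient awareness widens the lens to everything:
print(effective_view(ORG_DEFAULT_VIEW, {"scope": "everyone"}))
# A user who accepts the corporate default changes nothing:
print(effective_view(ORG_DEFAULT_VIEW, None))
```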

The Impending Enterprise 2.0 Software Market Consolidation

Talk about a trip down memory lane… Another excellent blog post yesterday by my friend and fellow Babson College alum, Sameer Patel, snapped me back a few years and gave me that spine-tingling sense of déjà vu.

Sameer wrote about how the market for Enterprise 2.0 software may evolve much the same way the enterprise portal software market did nearly a decade ago. I remember the consolidation of the portal market very well, having actively shaped and tracked it daily as an analyst and consultant. I would be thrilled if the E2.0 software market followed a similar, but somewhat different, direction than the portal market took. Allow me to explain.

When the portal market consolidated in 2002-2003, some cash-starved vendors simply went out of business. However, many others were acquired for their technology, which was then integrated into other enterprise software offerings. Portal code became the UI layer of many enterprise software applications and was also used as a data and information aggregation and personalization method in those applications.

I believe that much of the functionality we see in Enterprise 2.0 software today will eventually be integrated into other enterprise applications. In fact, I would not be surprised to see that beginning to happen in 2010, as the effects of the recession continue to gnaw at the business climate, making it more difficult for many vendors of stand-alone E2.0 software tools and applications to survive, much less grow.

I hope that the difference between the historical integration of portal technology and the coming integration of E2.0 functionality is one of method. Portal functionality was embedded directly into the code of existing enterprise applications. Enterprise 2.0 functionality should be integrated into other applications as services. Service-based functionality offers the advantage of writing once and using many times.  For example, creating service-based enterprise micro-messaging functionality (e.g. Yammer, Socialcast, Socialtext Signals, etc.) would allow it to be integrated into multiple, existing enterprise applications, rather than being confined to an Enterprise 2.0 software application or suite.

The primary goals of writing and deploying social software functionality as services are: 1) to allow enterprise software users to interact with one another without leaving the context in which they are already working, and 2) to preserve the organization’s investment in existing enterprise applications. The first is important from a user productivity and satisfaction standpoint, the second because of its financial benefit.
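
Here is a minimal sketch of the "write once, use many times" idea in Python. The class and the calling contexts are hypothetical; the point is that the social functionality lives in one shared service rather than being re-implemented inside each application.

```python
class MicroMessagingService:
    """One shared micro-messaging service, consumed by many applications."""
    def __init__(self) -> None:
        self.stream: list[dict] = []

    def post(self, author: str, text: str, context: str) -> None:
        # `context` records where the user was working when they posted, so
        # colleagues can follow the message back to the originating record.
        self.stream.append({"author": author, "text": text, "context": context})

service = MicroMessagingService()

# Called from within a CRM screen...
service.post("ann", "Anyone know this buyer?", context="crm:opportunity/1234")
# ...and from within an ERP screen, with no social code in either application.
service.post("ben", "This invoice looks odd; advice?", context="erp:invoice/5678")

print(len(service.stream))  # 2
```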

When the Enterprise 2.0 software market does consolidate, the remaining vendors will be there because they were able to create and sell:

  • a platform that could be extended by developers creating custom solutions for large organizations,
  • a suite that provided a robust, fixed set of functionality that met the common needs of many customers, or
  • a single piece or multiple types of service-based functionality that could be integrated into either other enterprise application vendors’ offerings or deploying organizations’ existing applications and new mashups.

What do you think? Will history repeat itself or will the list of Enterprise 2.0 software vendors that survived the impending, inevitable market consolidation consist primarily of those that embraced the service-based functionality model?

Google Wave Protocols: Clearing the Confusion

Today is the long-awaited day when 100,000 lucky individuals receive access to an early, but working, version of Google Wave. I hope I am in those ranks! Like many people, I have been reading about Wave, but have not been able to experience it hands-on.

Wave has been a hot topic since it was first shown outside of Google last May. Yet it continues to be quite misunderstood, most likely because it is such an early stage effort and most interested people have not been able to lay hands on the technology. For that very reason, Gilbane Group is presenting a panel entitled Google Wave: Collaboration Revolution or Confusion? at the Gilbane Boston conference, on December 3rd.

The confusion surrounding Wave was highlighted for me yesterday in a Twitter exchange on the topic. It all started innocently enough, when Andy McAfee asked:

[Andy McAfee’s tweet]

To which I replied:

[My reply]

That statement elicited the following comment from Jevon MacDonald of the Dachis Group:

[Jevon MacDonald’s tweet]

I am not a technologist. I seek to understand technology well enough that I can explain it in layman’s terms to business people, so they understand how technology can help them achieve their business goals. So I generally avoid getting into deep technical discussions. This time, however, I was pretty sure that I was on solid ground, so the conversation between Jevon and me continued:

[My two follow-up tweets]

[Jevon’s reply]

Now, here we are, at the promised blog post. But, how can Jevon and I both be correct? Simple. Google Wave encompasses not one, but several protocols for communication between system components, as illustrated in the figure below.

Figure 1: Google Wave Protocols (Source: J. Aaron Farr)

The most discussed of these is the Google Wave Federation protocol, which is an extension of the Extensible Messaging and Presence Protocol (XMPP). However, Wave also requires protocols for client-server communication and for robot (Web service) server-to-Wave server communication. It is also possible, but probably not desirable, for Wave to use a client-client protocol.

Jevon was absolutely correct about the XMPP protocol enabling server-server communication in the Google Wave Federation Protocol. The Draft Protocol Specification for the Google Wave Federation Protocol lays out the technical details, which I will not explore here. XMPP provides a reliable mechanism for server-server communication and is a logical choice for that function in Google Wave, because XMPP was originally designed to transmit instant messaging and presence data.
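
For readers who have not seen XMPP traffic, here is a schematic message stanza built with Python's standard library, just to show the shape of server-to-server communication. To be clear, this illustrates XMPP generally; the actual Wave federation payload is defined in the draft specification and is considerably richer, and the hostnames below are invented.

```python
import xml.etree.ElementTree as ET

# A bare-bones XMPP <message> stanza from one (hypothetical) Wave server to
# another. Real Wave federation traffic carries signed wavelet operations.
stanza = ET.Element("message", {
    "from": "wave.example.com",   # sending server (hypothetical hostname)
    "to": "wave.partner.org",     # receiving server (hypothetical hostname)
    "type": "normal",
})
ET.SubElement(stanza, "body").text = "serialized wavelet operations go here"

print(ET.tostring(stanza, encoding="unicode"))
```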

It turns out that the Google Wave team has not defined a specific protocol to be used in client-server communication. A Google whitepaper entitled Google Wave Data Model and Client-Server Protocol does not mention a specific protocol. The absence of a required or recommended protocol is also confirmed by this blog post. While the Google implementation of Wave does employ HTTP as the client-server protocol, as Jevon stated, it is possible to use XMPP as the basis for client-server communication, as I maintained. ProcessOne demonstrates this use of XMPP in this blog post and demo.

Finally, there is no technical reason that XMPP could not be used to route communications directly from one client to another. However, it would not be desirable to communicate between more than two clients via XMPP. Without a server somewhere in the implementation, Wave would be unable to coordinate message state between multiple clients. In plain English, the Wave clients most likely would not be synchronized, so each would display a different point in the conversation encapsulated in the Wave.

To summarize, Google Wave employs the following protocols:

  • XMPP for server-server communication
  • HTTP for client-server communication in the current Google implementation; XMPP is possible, as demonstrated by ProcessOne
  • HTTP (JSON RPC) for robot server to Wave server communication in the current Google implementation
  • No defined client-client protocol, as this mode of communication is most likely not usable in a Wave

I hope this post clarifies the protocols used in the current architecture of Google Wave for you. More importantly, I hope that it highlights just how much additional architectural definition needs to take place before Wave is ready for use by the masses. If I had a second chance to address Andy McAfee’s question, I would unequivocally state that Google Wave is a “concept car” at this point in time.

Postscript: The previously mentioned possibilities around XMPP as a client-client protocol are truly revolutionary. The use of XMPP as the primary communication protocol for the Internet, instead of the currently used HTTP protocol, would create a next-generation Internet in which centralized servers would no longer serve as intermediaries between users. Web application architectures, even business models, would change. See this post for a more detailed explanation of this vision, which requires each user to run a personal server on their computing device.

Reflections on Gov 2.0 Expo and Summit

O’Reilly’s Gov 2.0 events took place last week. We’ve had some time to think about what the current wave of activity means to buyers and adopters of content technologies.

Both the Expo and Summit programs delivered a deluge of examples of exciting new approaches to connecting consumers of government services with the agencies and organizations that provide them.

  • At the Expo on Sept 8, 25 speakers from organizations like NASA, TSA, US EPA, the City of Santa Cruz, the Utah Department of Public Safety, and the US Coast Guard provided five-minute overviews of their 2.0 applications in a sometimes dizzying, fast-paced format.
  • Sunlight Labs sponsored an Apps for America challenge that featured finalists who combined federal content available on Data.gov and open source software in some intriguing applications, including DataMasher, which enables you to mash up sources such as stats on numbers of high school graduates and guns per household.
  • The Summit on Sept 9 and 10 featured more applications plus star-status speakers including Aneesh Chopra, the US’s first CTO operating under the Federal Office of Science and Technology Policy; Vinton Cerf, currently VP and evangelist at Google; and Mitch Kapor.

A primary program theme was “government as platform,” with speakers suggesting and debating just what that means. There was much thoughtful discussion, if not consensus. Rather than report it all here, we suggest interested readers search the Twitter hash tags #gov20e and #gov20s for comments.

From the first speaker on, we were immediately struck by the rapid pace of change in government action and attitude about content and data sharing. Our baseline for comparison is Gilbane’s last conference on content applications within government and non-profit agencies in June 2007. In presentations and casual conversations with attendees, it was clear that most organizations were operating as silos. There was little sharing or collaboration within and among organizations. Many attendees expressed frustration that this was so. When we asked what could be done to fix the problem, we distinctly remember one person saying that connecting with other content managers just within her own agency would be a huge improvement.

Fast forward a little over two years to last week’s Gov2.0 events. Progress towards internal collaboration, inter-agency data sharing, and two-way interaction between government and citizens is truly remarkable. At least three factors have created a perfect storm of conditions: the current administration’s vision and mandate for open government, broad acceptance of social interaction tools at the personal and organizational level, and technology readiness in the form of open source software that makes it possible to experiment at low cost and risk.

Viewing the events through Gilbane’s content-centric lens, we offer three takeaways:

  • Chopra indicated that the formal Open Government directives to agencies, to be released in several weeks, will include the development of “structured schedules” for making agency data available in machine-readable format. As Tim O’Reilly said while interviewing Chopra, posting “a bunch of PDFs” will not be sufficient for alignment with the directives. As a result, agencies will be accelerating the adoption of XML and the transformation of publishing practices to manage structured content (a minimal illustration follows this list). As large buyers of content technologies and services, government agencies are market influencers. We will be watching carefully for the impact of Open Government initiatives on the broader landscape for content technologies.
  • There was little mention of the role of content management as a business practice or technology infrastructure. This is not surprising, given that Gov2.0 wasn’t about content management. And while the programs comprised lots of show-and-tell examples, most were very heavy on show and very light on tell. But it does raise a question about how these applications will be managed, governed, and made sustainable and scalable. Add in the point above, that structured content is now poised for wider adoption, and demand for XML-aware content management solutions will grow. Look for more discussion as agencies begin to acknowledge their content management challenges.
  • We didn’t hear a single mention of language issues in the sessions we attended. Leaving us to wonder if non-native English speakers who are eligible for government services will be disenfranchised in the move to Open Government.

All in all, thought-provoking, well-executed events. For details, videos of the sessions are available on the Gov2.0 site.

Enterprise 2.0 is Neither a Crock Nor the Entire Solution

Dennis Howlett has once again started a useful and important debate, this time with his Irregular Enterprise blog post entitled Enterprise 2.0: what a crock. While I am sympathetic to some of the thinking he expressed, I felt the need to address one point Dennis raised and a question he asked.

I very much agree with this statement by Dennis:

“Like it or not, large enterprises – the big name brands – have to work in structures and hierarchies…”

However, I strongly disagree with his related contention (“the Big Lie” as he terms it) that:

“Enterprise 2.0 pre-supposes that you can upend hierarchies for the benefit of all.”

Dennis also posed a question that probably echoes what many business leaders are asking:

“In the meantime, can someone explain to me the problem Enterprise 2.0 is trying to solve?”

Below is the comment that I left on Dennis’ blog. It begins to answer the final question he asked and addresses my disagreement with his contention that Enterprise 2.0 advocates seek to create anarchy. Is my vision for the co-existence of structured and recombinant organizational and work models clear and understandable? Reasonable and viable? If not, I will expand my thoughts in a future post. Please let me know what you think.

Enterprise 2.0 is trying to solve problems on a couple of levels.

From a technology standpoint, E2.0 is addressing the failure of existing enterprise systems to provide users with a way to work through exceptions in defined business processes during their execution. E2.0 technology does this by helping the user identify and communicate with those who can help deal with the issue; it also creates a discoverable record of the solution for someone facing a similar issue in the future.
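
A minimal sketch of that pattern in Python may help; every name in it is hypothetical rather than drawn from any E2.0 product:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessException:
    process: str
    question: str
    tags: set[str]
    answers: list[str] = field(default_factory=list)

# Who knows what, and the discoverable record of past exceptions.
EXPERTISE = {"ann": {"export-controls"}, "ben": {"pricing"}}
KNOWLEDGE_BASE: list[ProcessException] = []

def raise_exception(process: str, question: str, tags: set[str]) -> list[str]:
    """Record the exception and return colleagues whose expertise matches."""
    KNOWLEDGE_BASE.append(ProcessException(process, question, tags))
    return [person for person, skills in EXPERTISE.items() if skills & tags]

helpers = raise_exception("order-to-cash",
                          "Can we ship this order to this country?",
                          {"export-controls"})
print(helpers)  # ['ann'] -- and the question is now searchable by the next person
```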

From an organizational and cultural perspective, E2.0 is defining a way of operating for companies that reflects the way work is actually accomplished — by peer-to-peer interaction, not through command-and-control hierarchy. Contrary to your view, E2.0 does not pre-suppose the destruction of hierarchy. Correctly implemented (philosophy and technology), E2.0 provides management a view of the company that is complementary to the organization chart.

Addendum: See this previous post for more of my perspective on the relationship of structured and ad hoc methods of working.
