Microsoft Corp. laid out the next phase in its strategy for online services, offering a road map for new offerings that synthesize client, server and services software. Microsoft plans to deliver a variety of solutions during the coming months under two families of service offerings: "Live" and "Online." "Live" services from Microsoft are designed primarily for individuals, business end-users and virtual work groups. These services emphasize ease of use, simplicity of access and flexibility, and are suited for situations where people either don't have access to professional technical expertise or don't require high levels of system management. "Online" services are for organizations with more advanced IT needs where power and flexibility are critical. Online services from Microsoft give businesses the ability to control access to data, manage users, apply business and compliance policy, and meet availability standards. Microsoft is providing business customers with the flexibility to choose between traditional on-premise implementations, services hosted by Microsoft partners and now Online services that reside in Microsoft's datacenters. Microsoft also unveiled:
- Microsoft Office Live Workspace, a new Web-based feature of Microsoft Office that lets people access their documents online and share their work with others;
- Microsoft Exchange Labs, a new research and development program for testing new messaging and unified communications capabilities in high-scale environments;
- Continued customer and partner support for Microsoft Dynamics Live CRM;
- The renaming of the Microsoft Office Live hosted small-business service, a service dedicated to addressing small-business pain points, including core IT services and sales and marketing services, to Microsoft Office Live Small Business; and
- Microsoft BizTalk Services, a building block service that enables developers to build composite applications.
Anyone can pre-register for the English language beta of Office Live Workspace at http://www.officelive.com
Category: Collaboration and workplace
This category is focused on enterprise / workplace collaboration tools and strategies, including office suites, intranets, knowledge management, and enterprise adoption of social networking tools and approaches.
I spent two days last week at the Office 2.0 conference organized by Ismael Ghalimi. The first thing to say about it is that it is truly amazing what Ismael can put together in six weeks. As someone who has organized 60-70 conferences myself, my amazement at and respect for what Ismael accomplished, while not unique, is probably more pronounced than most people's.
What is “Office 2.0”? As far as I could tell the consensus in the opening panel “The Future of Work” (and in other sessions) was that it referred to any office-in-the-cloud tools, including but not limited to replications of Microsoft Office.
I would say "Office 2.0" is differentiated from "Web 2.0" by having mainly a business focus, and is differentiated from "Enterprise 2.0", at least in terms of this event, by being more about the technology than the effects of its deployment on enterprise practices. There was some gentle push and pull between Microsoft and Google on the relative importance of IT/workflow/regulations versus end-user/real-time collaboration. When pushed on what they would be adding to future work environments, both Microsoft and SAP stressed the importance of business social networks.
Though not a business social network, in spite of a growing number of professionals using it that way, Facebook was discussed throughout the event. There was much hand-wringing and disagreement over whether people would combine their personal and professional activities, contacts, and information for the world to see. I find it hard to fathom, but it is clear that there are a number of people who are happy and eager to do this. However, just as we’ve said about enterprise blogging, it is important to separate the technology from the way it is used, and there is a big difference between using a tool with social computing-like functionality inside a firewall, and the way people use Facebook. I don’t think there is any doubt that social-computing technology has a large and important role to play in enterprises. Note however that the Facebook generation does not necessarily agree!
Ismael gave an in-depth presentation on his exclusive use of "Office 2.0" tools for organizing and producing the conference. This was a fascinating case study. I have to say that after hearing about Ismael's experience I don't think we are quite ready to try this at home, mainly because of the integration issues. We will look at some of the individual tools though. In fact, as Ismael warns, integration is in general the main gotcha for enterprise use of Office 2.0 technology, both among the new tools themselves and between Office 2.0 tools and existing enterprise applications. Ismael describes the event and its organization as an experiment; given what was learned, it was surely a successful one.
(See some of the announcements from Office 2.0 at:
Drew Robb has written an excellent article that reflects new thinking about the convergence of older and emerging technologies: "Search Converging with Business Intelligence," published on CRM Daily.com on August 28, 2007. In it I see similarities to my own way of viewing what is possible with newer search tools. Be sure to read it.
In addition to enterprise search, I actively follow enterprise knowledge management. It has been much debated because of confusion about its links to inappropriate technologies and well-intentioned but costly and failed initiatives in many organizations. But, in spite of rumors about its death, KM will remain a boundless frontier of opportunity. At its best, it leverages collaborative and sharing practices to maximize the value of organizations’ discoveries, developments and learning using innovative and often simple practices that work because they suit a particular culture’s way of operating.
Popular writings about business and technology innovation, plus tools and techniques for collaboration and sharing abound. 2007 is surely the year when search has come to dominate the technology landscape as vendors in BI, text mining and text analytics, data management, and countless semantic and Web 2.0 entrants vie to add refinements, and conversely search integrates features from those technologies.
In August I commented on search offerings that have made a point of highlighting their “Sharepoint” connectivity. Similarly, many products are adding claims for exploiting emails. I have long assumed that email should be part of the search engine crawling and indexing mix for any intranet. Given email structure, it seems to have more useful and usable metadata than a lot of other content. Social network analysis tools have been terrific at revealing fascinating relationships and internal communications within organizations, especially in the discovery area as emails have been a source for exploring questionable business practices in legal proceedings.
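The point about email's built-in metadata is easy to illustrate. A minimal sketch, using Python's standard-library email parser on an invented message (the names and addresses are made up for illustration), shows the structured fields a crawler could hand straight to a search engine's index:

```python
from email import message_from_string

# An invented message, purely for illustration. Real crawlers would read
# messages from a mail store, but the headers are the same kind of
# ready-made metadata either way.
raw = """\
From: Ada Lovelace <ada@example.com>
To: Charles Babbage <charles@example.com>
Subject: Engine specifications
Date: Tue, 28 Aug 2007 09:15:00 -0400

Attached are the revised specifications we discussed.
"""

msg = message_from_string(raw)

# Structured fields suitable for a search index -- no text analytics
# required, unlike most unstructured intranet content.
metadata = {
    "from": msg["From"],
    "to": msg["To"],
    "subject": msg["Subject"],
    "date": msg["Date"],
}
# The body is what gets full-text indexed.
body = msg.get_payload()
```

Sender/recipient pairs like these are also exactly what social network analysis tools mine to map who communicates with whom.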
More sophisticated analytic and semantic techniques for exploiting concepts in content give hints about how technologies can integrate content by mapping experts to the expertise they contribute, even when it is scattered throughout their work, including emails where so many nuggets may reside. An area for development would correlate nuggets of knowledge in emails to reveal hidden and latent expertise, pointing to other content an individual has produced using search with BI and analytics algorithms.
Maybe I'm overreaching, but I suspect that many experts are not sufficiently motivated, disciplined or expected to aggregate their small but useful contributions into more valuable knowledge. Regardless of the reasons for that failure, much could be revealed with the right blending of search, indexing, analytics and business intelligence technologies. The components already exist, but implementations that produce the desired results are not necessarily easy to deploy. A truly innovative expertise exploitation engine would be a knowledge engine of note, able to synthesize new knowledge in unique and interesting ways. Historically, much has been made of the role of serendipity in the "search for truth" and "quest for knowledge." With the aid of enhanced search technologies to blend any or many expert nuggets, a lot more serendipity might happen.
IBM released Notes 8 and Domino 8 earlier this month — two years in development and “the industry’s first enterprise collaboration solution largely designed with input from its customers.” IBM has devoted a lot of its efforts towards creating an integrated user experience — messages, calendar entries, file folders, and queries to business applications can all appear within a single, tiled window. With customized sidebars and tool bars, application developers can add “peripheral vision” to the user experience, and integrate a variety of plug-ins. While this user interface style is hardly revolutionary, it does cut down on window-clutter and will go a long way towards improving the usability of complex application environments.
IBM has also introduced "message conversations" into Notes email. Rather than messages being displayed as discrete items, they are concatenated into their discussion threads — with the root message and all the replies captured in a single list. This reduces Inbox clutter — 150 messages (the average daily total for a "typical" Notes user) can be reduced to eight or ten threads.
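The threading idea itself is simple to sketch. Here's a toy illustration (not Notes' actual mechanism, and the message data is invented): follow each reply's parent pointer back to its root, and group messages under that root.

```python
# Toy sketch of conversation threading. Each message carries an id and
# an optional "in-reply-to" pointer; walking those pointers to a root
# collapses a flat inbox into threads.

def thread_root(msg_id, parents):
    """Follow in-reply-to links until reaching a message with no parent."""
    while msg_id in parents:
        msg_id = parents[msg_id]
    return msg_id

def group_into_threads(messages):
    """messages: list of (msg_id, in_reply_to_or_None, subject) tuples."""
    parents = {mid: parent for mid, parent, _ in messages if parent}
    threads = {}
    for mid, _, subject in messages:
        root = thread_root(mid, parents)
        threads.setdefault(root, []).append((mid, subject))
    return threads

inbox = [
    ("m1", None, "Q3 budget"),
    ("m2", "m1", "Re: Q3 budget"),
    ("m3", "m2", "Re: Q3 budget"),
    ("m4", None, "Team lunch"),
]
threads = group_into_threads(inbox)
# Four messages collapse into two threads, rooted at m1 and m4.
```

Real mail threading is messier (missing parents, forwards, subject-line heuristics), but the clutter reduction comes from exactly this kind of grouping.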
For organizations that made the Notes investment some years ago, there's no need to consider alternatives or doubt IBM's commitment to its core collaboration platform. Like the mainframes of an earlier computing era, Notes remains a solid messaging platform with integrated calendaring and contacts. It continues to serve as a development environment for ad hoc (workgroup-level) applications.
But I wonder about the growth opportunities for Notes. Many of us are quite comfortable with the "traditional" business activities engendered by this latest version — sending and receiving messages, scheduling and attending meetings, contacting people. Yet when we have so much information readily accessible at our fingertips, we are continually looking for new metaphors for doing work — bringing people together over the network, restructuring business processes, improving decision making. More is at stake than simply "reducing clutter." We need to focus as much on the "collaboration services" accessible within the network as on the quality of the user experience itself.
Sharepoint repositories are a prime content target for most search engines in the enterprise search arena, judging from the number of announcements I've previewed from search vendors in the last six months. This list is long and growing (names link to press releases or product pages for Sharepoint search enabling):
- Autonomy
- Coveo
- dtSearch
- FAST
- ISYS
- Longitude from BA-Insight
- Ontolica from Mondosoft
- OpenText
- Oracle
- Recommind
- Schemalogic
- Vivisimo
- X1
- … and surely more I’ve missed
Almost a year ago I began using a pre-MOSS version of Sharepoint to collect documents for a team activity. Ironically, the project was the selection, acquisition, and implementation of a (non-Sharepoint) content management system to manage a corporate intranet, extranet, and hosted public Web site. The version of Sharepoint that was "set up" for me was strictly out of the box. Not being a developer, I was still able to muddle my way through setting up the site, establishing users, and posting announcements and categories of content, to which I uploaded about fifty or sixty documents.
The most annoying discovery was the lack of a default search option. Later updating to MOSS solved the problem but at the time it was a huge aggravation. Because I could not guarantee a search option would appear soon enough, I had to painstakingly create titles with dates in order to give team members a contextual description as they would browse the site. Some of the documents I wanted to share were published papers and reviews of products. Dates were not too relevant for those, so I “enhanced” the titles with my own notations to help the finders select what they needed.
These silly "homemade" solutions are not uncommon when a tool does not anticipate how we would want to be able to use it. They persist as ways to handle our information storage and retrieval challenges. Since the beginning of time humans have devised ways to store things that they might want to re-use at some point in the future. Organizing for findability is an art as much as it is a science. Information science only takes one so far in establishing the organizing criteria and assigning those criteria to content. Search engines that rely strictly on the author's language will leave a lot of relevant content on the shelf, for the same reasons as using Dewey Decimal classification without the complementary card catalog of subject topics. The better search engines exploit every structured piece of data or tagged content associated with a document, and that includes all the surrounding metadata assigned by "categorizers." Categorizers might be artful human indexers or automated processes. Search engines with highly refined, intelligent categorizers that enable semantically rich finding experiences bring even more sophistication to the search experience.
But back to Sharepoint, which now does have an embedded search option: I've heard more than one expert comment on the likelihood that it will not be the "search" of choice for Sharepoint. That is why we have so many search vendors scrambling to promote their own Sharepoint search. This is probably because the organizing framework around contributing content to Sharepoint is so loosey goosey that an aggregation of many Sharepoint sites across the organization will be just what we've experienced with all these other homegrown systems – a dump full of idiosyncratic organizing tricks.
What you want to do, thoughtfully, is assess whether the search engine you need will search only Sharepoint repositories OR both structured and unstructured repositories across a much larger domain of content types and applications. It will be interesting to evaluate the options that are out there for searching Sharepoint gold mines. Key questions: Is a product targeting only Sharepoint sites or diverse content? How will content across many types of repositories be aggregated and reflected in organized results displays? How will the security models of the various repositories interact with the search engine? Answering these three questions first will quickly narrow your list of candidates for enterprise search.
Now here's an interesting tidbit from the BBC, courtesy of my daughter (who's a graduate student in London): Wikipedia 'shows CIA page edits.' It seems that staffers at the CIA, the Democratic National Campaign Committee, the Vatican, and many other well known institutions (who may be trying to remain nameless) have been 'caught' sprucing up various Wikipedia articles. (Well of course this is a tart British take on the matter!)
And the secret sauce that pulls back the curtain? Revealed at the end of the article, a simple mashup that links the IP addresses of contributors to an article (obtained through the “history” page) with a directory of organizations owning IP addresses. Both are publicly available. The results are hardly surprising.
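The join at the heart of that mashup is almost trivially simple. A toy sketch of the logic (the IP addresses, network blocks, and organization names below are all invented; the real mashup pulled edit IPs from Wikipedia history pages and ownership data from public IP registries):

```python
import ipaddress

# Invented directory of who owns which address block. In the real
# mashup this came from public IP-address registration records.
directory = {
    ipaddress.ip_network("198.51.100.0/24"): "Example Agency",
    ipaddress.ip_network("203.0.113.0/24"): "Example Committee",
}

# Invented anonymous-edit IPs, as scraped from an article's history page.
edits = ["198.51.100.14", "203.0.113.7", "192.0.2.99"]

def attribute(ip_str):
    """Map an editor's IP address to the organization owning its block."""
    ip = ipaddress.ip_address(ip_str)
    for net, org in directory.items():
        if ip in net:
            return org
    return "unknown"

attributions = {ip: attribute(ip) for ip in edits}
```

Both inputs are public; all the mashup adds is this lookup, which is exactly why the anonymity of "anonymous" edits is so thin.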
The point is that when information is so widely and freely available, we have to begin to worry about the sources of information and how it is presented. There’s not a lot of anonymity on the public web — and quite possibly this is a good thing. But building community also includes notions of trust, expertise, and terms of reference. For example, when starting eBay, Pierre Omidyar came up with the notion of “rate the buyer” and “rate the seller” as a way of organically building trust within the community of eBayers . . . and the rest is history.
I hate to admit it but perhaps Ronald Reagan said it best: "Trust but verify." What's interesting is that mashing up sources and IP addresses provides a whole new dimension to verification. I wonder what else is possible? Let's start a discussion — comments?
Movable Type announced the release of Movable Type 4.0. "This is the biggest release of MT ever, a complete redesign of both the front end information architecture and the back end scaling infrastructure." Movable Type 4 has a broad set of new capabilities, including: a redesigned user interface, more and better plugins, built-in support for OpenID, community features, the ability to aggregate content from multiple blogs, new support for standalone pages in addition to blog entries, "content management" features, smarter archiving (e.g., by author), more robust templates, and more. They also announced an upcoming open source version. http://www.movabletype.com/blog/2007/08/presenting-movable-type-40.html, http://www.movabletype.com/
It’s been a rough few weeks for infrastructure.
Of course the collapse of the I-35 bridge over the Mississippi in Minneapolis last Wednesday is on all of our minds – how could the inspections fail and the road fall down? Significantly, there are some videos that capture the moment, which hopefully will provide clues for determining the cause.
Closer to home, we had an eight-hour traffic jam on the I-93 loop in Braintree (a major highway south of Boston) a week ago Monday. A storm grate was thrown loose by a passing truck at the start of the morning rush, and landed on a near-by car. (Fortunately the driver survived.) Reportedly, the Massachusetts State Highway Department spent the rest of the day checking and welding shut all the grates on that highway. The next day, the same loose storm drain problem cropped up on a major road in Newton, near where I live. This time motorists were asked to dial a special code from their cell phones to report problems.
And then this morning New Yorkers awoke to a monsoon and a flooded mass transit system. The official M.T.A. web site could not keep up with the requests for information, and crashed when it was needed most.
You’ve gotta hand it to those hardy folks (and the New York Times) for that snarky, Big Apple attitude. Here’re a few priceless ones that I gleaned from NYTimes.Com during the day.
“Our transit system is not only frail but if it’s this vulnerable to rain attacks, then how vulnerable is it to terrorist attacks?”
“I walked from the Upper West Side all the way to work in Midtown, but thanks to Starbucks was able to stop in and cool myself every four blocks or so.”
(Now there’s somebody with brand loyalty!)
“And only the rats had no transportation problems.”
All this content about our physical infrastructure (user generated and otherwise) has the potential to bring social computing to a whole new level.
This got me thinking about the role of collaboration technologies for supporting our physical infrastructure. It’s great to be able to talk back — and let off a little steam. It’s even better to be able to call-in, and tell the authorities about the problem before there’s another horrible accident. But what else is possible? Could the bridge inspectors in Minneapolis have shared their observation reports, measurements, and perhaps photographs of the bridge’s structure over the past few inspection cycles, and had some semi-automated ways to detect the problems before the disaster? Unfortunately we’ll never know.
While I certainly don't have it all figured out, I can begin to see some bread crumbs towards the workable solution we all want and need. We can no longer rely on human intelligence alone. Our worlds are much too complex and interdependent. We need to augment our understandings, and our abilities to take action, with a variety of automated, content-centric tools, such as semantic technologies. (My colleagues Lynda Moulton and Frank Gilbane and I are picking up coverage of this area, the ability to inject "meaning" and "context" into an enterprise environment. Be sure to check out the semantic technologies track at our upcoming conference in November.)
I’ve seen a couple of promising developments this month. SchemaLogic is finally reporting some progress in the publishing space, enabling publishers such as Associated Press to automatically repurpose content by synchronizing tags and managing metadata schemas. While pretty geeky, this is very neat! Now we need to see how this approach to managing semantics within the enterprise will impact collaboration and social computing.
Then project and portfolio management (PPM) systems — heretofore heavyweight (often mainframe) applications used to track resources for complex, engineering-driven projects — are being redeployed as Web 2.0 environments. In particular, eProject is now transforming its Web-based PPM environment into a broader collaborative tools suite. Seeking to capitalize on its expanded mission of bringing a PPM model to the Web, eProject is also renaming itself in the process.
Where do these bread-crumbs lead? As a first step, we need to focus on how our collaboration infrastructure (fueled by our information architecture) can augment the work of people responsible for our physical infrastructure (ourselves included). At the end of the day, we need to be able to rely on this collaboration infrastructure to help us sense and respond to the challenges of simply getting from one place to another.