The Enterprise 2.0 Conference begins this evening in Boston. Conference organizers indicate that there are approximately 1,500 people registered for the event, which has become the largest one for those interested in the use of Web 2.0 technologies inside business organizations.
The most valuable part of last year’s conference was the case studies on Enterprise 2.0 (E2.0) from early adopter organizations like Lockheed Martin and the Central Intelligence Agency. Those sessions presented an early argument for how and why Web 2.0 could be used by businesses.
Here are some things that I anticipate encountering at the E2.0 Conference this year:
- a few more case studies from end user organizations, but not enough to indicate that we’ve reached a tipping point in the E2.0 market
- an acknowledgement that there are still not enough data and case studies to allow us to identify best practices in social software usage
- confirmation that entrenched organizational culture remains the single largest obstacle to businesses trying to deploy social software
- a nascent understanding that E2.0 projects must touch specific, cross-organizational business processes in order to drive transformation and provide benefit
- a growing realization that E2.0 adoption will not accelerate meaningfully until more conservative organizations hear and see how other companies have achieved specific business results and return on investment
- a new awareness that social software and its implementations must include user, process, and tool analytics if we are ever to build an ROI case that is stated in terms of currency, not anecdotes
- more software vendors that have entered the E2.0 market, attracted by the size of the business opportunity around social software
- a poor opinion of, and potentially some backlash against, Microsoft SharePoint as the foundation of an E2.0 solution; this will be tempered, however, by a belief that SharePoint 2010 will be a game changer and upset the current dynamics of the social software market
- an absence of understanding that social interactions are content-centric and, therefore, that user-generated content must be managed in much the same manner as more formal documents
So those are some of my predictions for take-aways from this year’s E2.0 conference. I will publish a post-conference list of what I actually did hear and learn. That should make for some interesting comparison with today’s post; we will learn if my sense of the state of the market was accurate or just plain off.
In the meantime, I will be live-tweeting some of the sessions I attend so you can get a sense of what is being discussed at the E2.0 Conference on the fly. You can see my live tweets by following my event feed on Twitter.
MuseGlobal announced a partnership with Specialty Systems, Inc., a company focused on innovative information systems solutions for Federal, State, and Local Government customers. Specialty Systems will provide the systems integration expertise to engineer law enforcement and homeland security applications built on MuseGlobal’s MuseConnect, which provides federated search and harvesting technologies with a library of more than 6,000 pre-built source connectors. The applications resulting from this partnership will incorporate unified information access, allowing structured data from database sources; semi-structured data from spreadsheets, forms, and XML sources; unstructured data from web sites, documents, and email; and rich media such as images, video, and audio to be accessed simultaneously from internal databases and external sources. This information is gathered on the fly and unified for immediate presentation to the requestor. http://www.specialtysystems.com, http://www.museglobal.com
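The “gathered on the fly and unified” pattern described above is classic federated search: fan one query out to many sources in parallel, then merge the results into a single list. A minimal generic sketch follows — this is not MuseGlobal code, and the two connector functions are invented stand-ins for the kind of pre-built source connectors MuseConnect supplies:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-source connectors; a real product would supply
# thousands of pre-built ones for databases, web sites, email, etc.
def search_internal_db(query):
    return [{"source": "internal_db", "title": f"Record matching {query!r}"}]

def search_web_source(query):
    return [{"source": "web", "title": f"Page matching {query!r}"}]

def federated_search(query, connectors):
    """Fan the query out to every connector in parallel, then
    unify the per-source hit lists for immediate presentation."""
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda c: c(query), connectors))
    return [hit for hits in result_lists for hit in hits]

results = federated_search("warrant 2041-B", [search_internal_db, search_web_source])
```

The essential design point is that the requestor sees one unified result list and never needs to know how many sources were queried or in what format each one answered.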
Positioning content practices as strategic, making business cases that get funding, and selling up within the organization are among the most common challenges presented to Gilbane Group analysts in conversations with users, adopters, and buyers of content technologies. Our advice to clients always includes aligning the target investment with the strategic goals and objectives of the business. By placing content practices and infrastructures directly in the path of promises to customers and shareholders, managers improve their chances of securing financial and sponsorship support. In some cases, they can effect innovative change that not only advances their domain’s capabilities but also results in new value creation for the enterprise.
Gilbane believes that true innovation delivers new value to organizations that are willing to take the risks associated with fundamental, qualitative change. The innovations resulting from FICO’s alignment of product and content development practices with business strategies are object lessons for any organization that needs to compete effectively in global markets.
Download the FICO story here: Innovation3: The FICO Formula for Agile Global Expansion
Listen to the webinar archive here: Innovating for Agility: Global Content Practices at FICO
SDL Tridion announced that it has partnered with Q-go to provide an integrated Natural Language Search engine within SDL Tridion’s web content management platform. The solution provides websites’ online search environments with only targeted and relevant search results. Q-go’s Natural Language Search is now accessible from within the SDL Tridion web content management environment. Content editors are able to create model questions in the Q-go component of the SDL Tridion platform. This means that the most common questions pertaining to products and the website itself can be targeted and answered by web content editors, creating streamlined content and vastly increased relevance of searches. The integration also means that only one interface is needed to update the entire website, which can be done anywhere, anytime. You can find more information on the integration at the eXtensions Community at http://www.sdltridionworld.com
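The model-question approach amounts to mapping a visitor’s free-form question onto the closest editor-authored question, whose answer content is already curated. The toy matcher below uses simple word overlap purely to illustrate the concept; the model questions and answer paths are invented, and Q-go’s actual linguistic engine is far more sophisticated:

```python
# Editor-authored model questions mapped to curated answer pages
# (both invented for illustration).
MODEL_QUESTIONS = {
    "How do I reset my password?": "/help/password-reset",
    "What are your shipping costs?": "/help/shipping",
}

def best_model_question(user_question):
    """Pick the model question sharing the most words with the
    visitor's question -- a crude stand-in for linguistic matching."""
    words = set(user_question.lower().split())
    def overlap(model_q):
        return len(words & set(model_q.lower().split()))
    return max(MODEL_QUESTIONS, key=overlap)

answer_page = MODEL_QUESTIONS[best_model_question("how can i reset a password")]
```

Because editors maintain the model questions inside the same interface as the rest of the site content, improving search relevance becomes an editorial task rather than an engineering one.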
Mark Logic Corporation released the MarkLogic Toolkit for Excel. This new offering provides users with a free way to integrate Microsoft Office Excel 2007 with MarkLogic Server. Earlier this year, Mark Logic delivered a Toolkit for Word and a Connector for SharePoint. Together, these offerings allow users to extend the functionality of Microsoft Office products and build applications leveraging the native document format, Office Open XML (OOXML). Distributed under an open source model, MarkLogic Toolkit for Excel comes with an Excel add-in that allows users to deploy information applications into Excel, comprehensive libraries for managing and manipulating Excel data, and a sample application that leverages best practices. The MarkLogic Toolkit for Excel offers greater search functionality, allowing organizations to search across their Excel files for worksheets, cells, and formulas. Search results can be imported directly into the workbooks that users are actively authoring. Workbooks, worksheets, formulas, and cells can be exported directly from active Excel documents to MarkLogic Server for immediate use by queries and applications. The Toolkit for Excel allows customers to easily create new Excel workbooks from existing XML documents. Users can now manipulate and re-use workbooks stored in the repository with a built-in XQuery library. For instance, a financial services firm can replace the manual process of cutting-and-pasting information from XBRL documents to create reports in Excel with an automated system. Utilizing the Toolkit for Excel, this streamlined process extracts relevant sections of XBRL reports, combines them, and saves them as an Excel file. The Toolkit also allows users to add and edit multiple custom metadata documents across workbooks. This improves users’ ability to discover and reuse information contained in Excel spreadsheets.
To download MarkLogic Toolkit for Excel, visit the Mark Logic Developer Workshop located at http://developer.marklogic.com/code/, http://www.marklogic.com
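The XBRL-to-Excel scenario above is an extract-and-combine pipeline: pull the relevant facts out of an XML report, then flatten them into a spreadsheet-ready table. The sketch below illustrates that shape in generic stdlib Python — it is not the MarkLogic Toolkit’s actual API, the sample document is an invented XBRL-like fragment (real XBRL uses taxonomy namespaces and contexts), and CSV stands in for a full OOXML workbook:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Invented XBRL-like fragment for illustration only.
XBRL_SAMPLE = """<report>
  <fact name="Revenue" period="FY2008">1200</fact>
  <fact name="NetIncome" period="FY2008">150</fact>
  <fact name="Revenue" period="FY2007">1100</fact>
</report>"""

def extract_facts(xbrl_text, wanted):
    """Pull only the relevant facts out of the report, replacing
    the manual cut-and-paste step."""
    root = ET.fromstring(xbrl_text)
    return [
        (f.get("name"), f.get("period"), float(f.text))
        for f in root.iter("fact")
        if f.get("name") in wanted
    ]

def to_spreadsheet_rows(facts):
    """Combine extracted facts into a flat, spreadsheet-ready table."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "period", "value"])
    writer.writerows(facts)
    return buf.getvalue()

facts = extract_facts(XBRL_SAMPLE, {"Revenue"})
table = to_spreadsheet_rows(facts)
```

The point of automating this step is repeatability: the same extraction runs identically on every quarterly report, where manual cut-and-paste invites transcription errors.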
When thinking about some enterprise search use cases that require planning and implementation, presentation of search results is not often high on the list of design considerations. Learning about a new layer of software called Documill from CEO and founder, Mika Könnölä, caused me to reflect on possible applications in which his software would be a benefit.
There is one aspect of search output (results) that always makes an impression when I search. Sometimes the display is clear and obvious; other times the first thing that pops into my mind is “what the heck am I looking at?” or “why did this stuff appear?” In most cases, no matter how relevant the content may end up being to my query, I usually have to plow through a lot of content pieces (could be dozens) to confirm the validity or usefulness of what is retrieved.
Admittedly, much of my searching is research or helping with a client’s intranet implementation, not just looking for a quick answer, a fact or specific document. When I am in the mode for what I call “quick and dirty” search, I can almost always frame the search statement to get the exact result I want very quickly. But when I am trying to learn about a topic new to me, broaden my understanding or collect an exhaustive corpus of material for research, sifting and validating dozens of documents by opening each and then searching within the text for the piece of the content that satisfied the query is both tedious and annoyingly slow.
That is where Documill could enrich my experience considerably, for it can be layered on any number of enterprise search engines to present results in the form of precise thumbnails that show where in a document the query criteria are located. In their own words, “it enhances traditional search engine result list with graphically accurate presentation of the content.”
Here are some ideas for its application:
- In an application developed to find specific documents from among thousands that are very similar (e.g. invoices, engineering specifications), wouldn’t it be great to see only a dozen pages, each already opened to the correct location where the data matches the query?
- In an application of tens of thousands of legacy documents, OCRed for metadata extraction and displayable as PDFs, wouldn’t it be great to have the exact pages of the document that match the search displayed as visual images, opened to read, in the results page? This is especially important in technical documents of 60–100 pages where the target content might be on page 30 or 50.
- In federated search output, when results may contain many similar documents, the immediate display of just the right pages as images ready for review will be a time-saving blessing.
- In a situation where a large corpus of content contains photographs or graphics, such as newspaper archives or scientific and engineering drawings, an instantaneous visual of the content will sharpen access to just the right documents.
I highly recommend that you ask your search engine solution provider about incorporating Documill into your enterprise search architecture. And if you do, please share your experiences with me through comments on this post or by reaching out for a conversation.
The article “Accuracy Essential to Success of XBRL Financial Filing Program,” by Eileen Z. Taylor and Matt Shipman (NC State News, June 8, 2009), has been widely talked about recently in XBRL circles.
The key sentence in the news story about the academic paper states:
“The researchers are concerned that, if the upcoming XBRL filings do not represent a significant improvement from the voluntary reports, stakeholders in the financial community will not have any faith in the XBRL program – and it will be rendered relatively ineffective.”
Wrong on at least two counts. First, to assume that XBRL submissions in the formal, rule-laden, error-checking mandatory program will be as error-ridden as those in the sandbox, free-for-all, no-rules voluntary filing program (VFP) is flat out wrong. I suggest the authors of the paper read the EDGAR filing manual, chapter 6, which details hundreds of rules that must be followed before an XBRL exhibit will be accepted by the system. In other words, almost every error found in the VFP by the researchers will be rejected by the SEC and require correction.
Second, while validation programs can catch some of the accounting errors introduced into XBRL filings, responsible and knowledgeable humans at filing corporations must review submissions prior to filing. The management team is responsible for the data contained in the XBRL exhibits. The SEC has specifically stated that they expect corporations to have in place an XBRL preparation process that is documented and tested in a similar fashion to other required internal controls. An accounting error on any future XBRL exhibit is an indication that the company does not have sufficient internal controls in place.
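Mechanically, the gatekeeping described above is a battery of rule checks run against each exhibit before submission, with the filing rejected on any failure. A minimal sketch of that reject-on-any-error shape — the two rules below are invented placeholders, not the actual EDGAR Filer Manual chapter 6 rules, which number in the hundreds:

```python
# Each rule inspects an exhibit (a dict of tagged values here) and
# returns an error message, or None if the rule passes. These rules
# are invented placeholders for illustration.
def check_positive_revenue(exhibit):
    if exhibit.get("Revenue", 0) < 0:
        return "Revenue must not be negative"
    return None

def check_balance(exhibit):
    if exhibit.get("Assets") != exhibit.get("Liabilities", 0) + exhibit.get("Equity", 0):
        return "Assets must equal Liabilities + Equity"
    return None

RULES = [check_positive_revenue, check_balance]

def validate_exhibit(exhibit):
    """Run every rule; the exhibit is accepted only if the returned
    error list is empty, mirroring reject-and-correct gatekeeping."""
    return [msg for rule in RULES if (msg := rule(exhibit)) is not None]

good = {"Revenue": 500, "Assets": 100, "Liabilities": 60, "Equity": 40}
bad = {"Revenue": -5, "Assets": 100, "Liabilities": 90, "Equity": 40}
```

Automated checks of this kind catch structural and arithmetic errors before submission; judgment calls about whether the right tags were chosen still require the human review the SEC expects as part of internal controls.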
No, I’m not expecting the startup to be perfect. However, I do expect XBRL filings to be as accurate as, or more accurate than, existing HTML EDGAR filings.