Curated for content, computing, data, information, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Marketing strategy versus technology – should be a virtuous circle

Scott Brinker has another must-read post. I excerpt parts of it below so I can expand on it a bit, but you should read his full post along with the comments.

In his post Scott explains that he is responding to statements made in a podcast by Joe Pulizzi and Robert Rose. After linking to the podcast and agreeing with much of what they say, Scott makes three points:

  1. “Marketing technology is not just about efficiency — it’s about experiences.
  2. The relationship between strategy and technology is circular, not linear.
  3. Marketers cannot abdicate their responsibility to understand technology.”

and mentions the one quote he really disagrees with (emphasis is Scott’s):

“Figure out your process first. And then get aligned with your internal IT guys to figure out what it is you exactly need to facilitate. Because that’s the only thing that technology will ever, ever do. The only thing technology will ever do is facilitate a process that you have more efficiently. That’s all it’s ever going to do.”

That is a pretty strong recommendation for option A in Scott’s illustration below.

[Illustration from Scott Brinker’s post: the circular relationship between strategy and technology]

I want to make three points:

The fact that the relationship between technology and strategy is circular – that they have to inform, influence, and advance with each other – is true of all enterprise applications and all business functions, and it has always been true.

  • If you replace “technology” with “data” or “big data” or “analytics” the points that Scott makes are equally valid. (For a different take on this see Big data and decision making: data vs intuition.)
  • Technology is not just a set of product features. The features are possible because of creative combinations of underlying software concepts, programming languages, data structures, and architectures. Without some understanding of these fundamentals it is natural to think product features define software capabilities, and thus to limit insight into strategy possibilities. Marketers (or other professionals) with little to no technical background can only compare feature sets and build strategies to match them, or build strategies and then look for already existing product features to match.
  • Each of these illustrates what we might call the bad kind of circularity (as when we call an argument circular), and they handicap innovation. The good kind of circularity is a strategy/technology dialog of what-ifs, informed by what might be possible, not just by what is already known.

It is both natural and common for consultants to overemphasize option A, because their customers and technology vendors way too often overemphasize option B at the expense of option A. Good consultants spend a lot of time and effort helping customers overcome an under-appreciation, or political deprecation, of the importance of strategy. But all of us need to be careful not to present either linear direction as the choice.

Content and User Experience Design for the Internet of Smart Things – Gilbane Conference Spotlight

There are many reasons to be excited about the Internet of Things, but a content channel is not usually considered one of them. In fact, the mere suggestion of a need to support one more digital channel is enough to cause many execs to consider a career change, never mind n additional channels, and n additional channels is exactly what the future holds.

Many internet things don’t and won’t need to prepare content for direct human consumption, but many will – cars, watches, and glasses are just the beginning. The variety of form factors, display technologies, and application requirements will present challenges in user experience design, content strategy, content management, and data integration. The session we are spotlighting today will focus on the user experience design challenges, of which there are many.

T7. Have You Talked To Your Refrigerator Today? Content and User Experience Design for the Internet of Smart Things

Wednesday, December 4: 2:00 p.m. – 3:20 p.m. – The Westin Boston Waterfront

The web is dead. Or is it evolving into the Internet of things? If so, how can we harness the emergence of smart and app-enabled devices, appliances, homes, cars and offices into the digital gene pool? Four senior executives in experience planning and strategy, technology, creative and user experience will provide a point of view on the Internet of smart things and answer key questions, including the following, using real world examples:

  • How can your smart washing machine, refrigerator and dishwasher be mated with intelligent apps, CRM, and dynamic content management systems to create real-time marketing and ecommerce experiences?
  • What happens to content strategy and management as app-enabled “playthings” become essential to your work and family life?
  • What do we do as video baby monitors become digital caretaking, developmental tracking, medical monitoring, and product ordering parent-bots?
  • What is the optimal customer experience for using voice to simultaneously integrate and operate your car, your mechanic, your GPS, your iPod, your radio, your tablet and your smartphone?
  • What best practices are needed for creative designers, content strategists, marketers, and user experience designers to create engaging Internet of smart things experiences?
Moderator:
Doug Bolin, Associate Director, User Experience Design, Digitas
Panelists:
Michael Vessella, Vice President, Director, Experience Design, Digitas
Michael Daitch, Vice President, Group Creative Director, Digitas
Adam Buhler, Vice President, Creative Technology / Labs / Mobile, Digitas

 

What Experts Say about Enterprise Search: Content, Interface Design and User Needs

This recap might have the ring of an old news story, but these clips are worth repeating until more enterprises get serious about making search work for them, instead of allowing search to become an expensive venture in frustration. Enterprise Search Europe, May 14–16, 2013, was a small meeting with a large punch. My only regret is that the audience did not include enough business and content managers. I can only imagine that the predominant audience members, IT folks, are frustrated that the people whose support they need for search to succeed were not in attendance to hear the messages.

Here are just a few of the key points that business managers and those who “own” search budgets need to hear.

On Day 1 I attended a workshop presented by Tony Russell-Rose [Managing Director, UXLabs and co-author of Designing the Search Experience, also at City University London], Search Interface Design. While many experts talk about the two top priorities for search success, recall (all relevant results are returned) and precision (all returned results are relevant), they usually fail to acknowledge a hard truth: we all want “the whole truth and nothing but the truth,” but as Tony pointed out, we can’t have both. He went on to offer this general guidance: recall matters most in highly regulated or risk-intensive businesses, while in e-commerce we tend to favor precision. I would add that in enterprises that must both manage risk and sell products, there is a place for two types of search, with priorities varying by business purpose. My takeaway: universal, all-in-one search implementations across an enterprise will leave most users disappointed. It’s time to acknowledge the need for different types of implementations, depending on need and audience.
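The tradeoff is easier to see with the standard definitions in code. Here is a minimal sketch of computing precision and recall for a single query; the document IDs and relevance judgments are illustrative, not from Tony’s workshop:

```python
# Precision and recall for one query, given the engine's results and a
# human-judged set of relevant documents (IDs here are illustrative).
def precision_recall(returned: set, relevant: set) -> tuple:
    hits = len(returned & relevant)
    precision = hits / len(returned) if returned else 0.0  # how clean the results are
    recall = hits / len(relevant) if relevant else 0.0     # how complete they are
    return precision, recall

# 8 results returned, 4 of the 10 relevant documents among them:
p, r = precision_recall({1, 2, 3, 4, 5, 6, 7, 8},
                        {1, 2, 3, 4, 11, 12, 13, 14, 15, 16})
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.40
```

Tuning to raise one number typically lowers the other, which is exactly why a compliance-driven deployment and an e-commerce deployment should not share one configuration.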

Ed Dale [Digital Platforms Product Manager, Ernst & Young (USA)] gave a highly pragmatic keynote at the meeting opening, The Six Drivers for Search Quality. The overarching theme was that search rests on content. He went on to describe the Ernst & Young drivers: the right content; content optimized for search; constant tuning for optimal results; a user interface that is effective for each user type; attention to user needs; and consistency in function and design. Ed closed with this guidance: develop your own business drivers based on issues that are important to users, and then, based on those and the company’s drivers, focus your efforts, remembering that you are not your users.

The Language of Discovery: A Toolkit for Designing Big Data Interfaces and Interactions was presented by Joseph Lamantia [UX Lead: Discovery Products and Services, Oracle Endeca]. He argued that discovery is the ability to understand data, and that data by itself should not be treated as having value until discovery is achieved. Discovery was defined as something you have seen, found, and made sense of in order to derive insight; it is achieved by grasping meaning and significance. What I found most interesting was the discussion of modes of searching that have grown out of a number of research efforts. Begin with slide 44, “Mediated Sensemaking,” to learn the precursors that lead into his “modes” description. When considering search for the needy user, this discussion is especially important. We all discover and learn in different ways, and the “modes” topic highlights the multitude of options to contemplate. [NOTE: Don’t overlook Joe’s commentary that accompanies the slides at the bottom of the SlideShare.]

Joe was followed by Tyler Tate [Cofounder, TwigKit] on Information Wayfinding: A New Era of Discovery. He asked the audience to consider this question: “Are you facilitating the end-user throughout all stages of the information seeking process?” The stages are: initiation > selection > exploration > formulation > collection > action. This is a key point both for those most involved in user interface design and for content managers thinking about facet vocabulary and sorting results.

Steve Arnold [ArnoldIT] always brings a “call to reality” aspect to his presentations, and Big Data vs. Search was no different. On “Big Data” a couple of key points stood out: “more data” is not just more data, it is different, and as soon as we begin trying to “manage” it we have to apply methods and technologies to reduce it to dimensions that search systems can deal with. Search data processing has changed very little in the last 50 years, and processing constraints limit indexing capabilities across these super-large sets. There are great opportunities for creating management tools (e.g. analytics) for big data in order to optimize search algorithms and make the systems more affordable and usable. Among Arnold’s observations was the incessant push to eliminate humans, getting away from techniques and methods [to enhance content] that work and replacing them with technology. He noted that all the camera and surveillance systems in Boston did not stop the Marathon bombers, but people in the situation did limit casualties through quick medical intervention and did provide the descriptions of suspicious people who turned out to be the principal suspects. People must still be closely involved for search to succeed, regardless of the technology.

SharePoint lurks in every session at information technology conferences and this meeting was no exception. Although I was not in the room to hear the presentation, I found the slides from Agnes Molnar [International SharePoint Consultant, ECM & Search Expert, MVP], Search Based Applications with SharePoint 2013, to be among the most direct and succinct explanations of when SharePoint makes sense. They nicely explain where SharePoint fits in the enterprise search ecosystem. Thanks to Agnes for the clarity of her presentation.

A rapid-fire panel on “Trends and Opportunities” moderated by Alan Pelz-Sharpe [Research Director for Content Management & Collaboration, 451 Research] included Charlie Hull [Founder of Flax], Dan Lee of Artirix, Kristian Norling of Findwise (see Findwise survey results), Eric Pugh of OpenSource Connections, and René Kriegler, an independent search consultant. Among the key points offered by the panelists were:

  • There is a lot to accomplish to make enterprise search work after installing the search engine. When it comes to implementation and tuning there are often significant gaps in products and available tools to make search work well with other technologies.
  • Search can be leveraged to find signals of what is needed to improve the search experience.
  • Search as an enterprise application is “not sexy” and does not inspire business managers to support it enthusiastically. Its potential value and sustainability are not well understood, so managers do not view it as something that will increase their own importance.
  • Open source adoption is growing but does face challenges. VC-backed companies in that arena will struggle to generate enough revenue to make VCs happy. The committer community is dominated by a single firm, and that may weaken the staying power of other open source search (Lucene, Solr) committers.

A presentation late in the program by Kara Pernice, Managing Director of NN/g, Nielsen Norman Group, positioned the design of an intranet as a key element in making search compelling. Her insights reflect two decades of “Eyetracking Web Usability” research done with Jakob Nielsen, and how that research applies to an intranet. Intranet Search Usability was the theme, and Kara’s observations were keenly relevant to the audience.

Not the least valuable part of my three days at the meeting were the side discussions with Valentin Richter, CEO of Raytion; Iain Fletcher of Search Technologies; Martin Rugfelt of Expertmaker; Benoit Leclerc of Coveo; and Steve Andrews, an advisor to Q-Sensei. These contributed many ideas on the state of enterprise search. I left the meeting with the overarching sense that enterprise leadership needs to be sold on the benefits of sustaining a search team as part of the information ecosystem. The challenge is conveying that search is not just a technological, plug-and-play product or a “one-off” project. That messaging is not getting through effectively. We need strong and clear business voices to make the case; the signals are too diffuse, and that makes them weak. My take is that the messages from search vendors all have valid points of view, but when they are combined with too many other topics (e.g. “big data,” “analytics,” “open source,” SharePoint, “cloud computing”) the basic concepts of what search is and where it belongs in the enterprise get lost.

What big companies are doing with big data today

The Economist has been running a conference largely focused on Big Data for three years. I wasn’t able to make it this year, but the program looks like it is still an excellent event for executives to get their heads around the strategic value, and the reality, of existing big data initiatives from a trusted source. Last month’s conference, The Economist’s Ideas Economy: Information Forum 2013, included an 11-minute introduction to a panel on what large companies are currently doing and how boardrooms are looking at big data today, almost perfect for circulating to C-suites. The presenter is Paul Barth, managing partner at NewVantage Partners.

Thanks to Gil Press for pointing to the video on his What’s The Big Data? blog.

The Analyst’s Lament: Big Data Hype Obscures Data Management Problems in the Enterprise

I’ve been a market and product analyst for large companies. I realize that my experiences are a sample of one, and that I can’t speak for my analyst peers. But I suspect some of them would nod in recognition when I say that in those roles I spent only a fraction of my time actually conducting data analysis. With the increase in press that Big Data has received, I started seeing a major gap between what I was reading about enterprise data trends and my actual experiences working with enterprise data.

A more accurate description of what I spent large amounts of time doing was data hunting. And data gathering, and data cleaning, and data organizing, and data checking. I spent many hours trying to find the right people in various departments who “owned” different data sources. I then had to locate definitions (if they existed – this was hit or miss) and find out what quirks the data had so I could clean it without losing records (for example, which of the many data fields with the word “revenue” in their names would actually give me revenue). In several cases I found myself begging fellow overworked colleagues to please, please pull the data I needed from a database which I in theory should have had access to, but was shut out of due to multiple layers of bureaucracy and overall cluelessness as to what data lived where within the organization.
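For the record, here is roughly what that “which revenue field is actually revenue?” hunt looks like in practice. A hedged Python/pandas sketch; the CRM export file and its column names are hypothetical:

```python
import pandas as pd

# Hypothetical CRM export; the file and its columns are illustrative.
df = pd.read_csv("crm_extract.csv")

# Find every field with "revenue" in its name...
candidates = [c for c in df.columns if "revenue" in c.lower()]

# ...then profile each one to guess which could plausibly be booked revenue.
for col in candidates:
    print(f"{col}: dtype={df[col].dtype}, missing={df[col].isna().mean():.0%}, "
          f"sample={df[col].dropna().head(3).tolist()}")
```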

Part of me thought, “Well, this is the lot of an analyst in a large company. It is the job.” And this was confirmed by other more senior managers – all on the business side, not on the IT side – who asserted that, yes, being a data hunter/gatherer/cleaner/organizer/checker was indeed my job. But another part of me was thinking, “These are all necessary tasks in dealing with data. I will always need to clean data no matter what. I will need to do some formatting and re-checking to make sure what I have is correct. But should this be taking up such a large chunk of my time? This is not the best way I can add value here. There are too many business questions I could potentially be trying to help solve; there has got to be a better way.”

So initially I thought, not being an IT professional, that this was an issue of not having the right IT tools. But gradually I came to understand that technology was not the problem. More often than not, I had access to best-in-class CRM systems, database and analytics software, and collaboration tools at my disposal. I had the latest versions of Microsoft Office and a laptop or desktop with decent processing power. I had reliable VPN connectivity when I was working remotely, and often a company-supplied smartphone. It was the processes and people that were the biggest barriers to getting the information I needed in order to provide fact-based research that could inform business-critical decisions.

Out of sheer frustration, I started doing some research to see if there was indeed a better way for enterprises to manage their data. Master Data Management (MDM), you’ve been around for over a decade, why haven’t I ever encountered you?  A firm called the Information Difference, a UK-based consultancy which specializes in MDM, argues that too often, decisions about data management and data governance are left solely to the IT department. The business should also be part of any MDM project, and the governance process should be sponsored and led by C-level business management. Talk about “aha” moments.  When I read this, I actually breathed a sigh of relief. It isn’t just me that thinks there has to be a better way to go, so that the not-cheap business and market analysts that enterprises the world over employ can actually spend more of their time solving problems and less time data wrangling!

That’s why, when I read the umpteenth article/blog post/tweet about how transformative Big Data is and will be, I cannot help but groan. Before enterprises begin to think about new ways of structuring and distributing data, they need to audit how existing data is already used within and between different businesses. In particular, they should consider MDM if it has not already been implemented. There is so much valuable data that already exists in the enterprise, but the business and IT have to actually work together to deploy and communicate about data initiatives. They also need to evaluate whether and how enterprise data is being used effectively for business decisions, and whether that usage meets compliance and security rules.

I suspect that many senior IT managers know this and agree. I also suspect that getting counterparts in the business to be active and own decisions about enterprise data, and not just think data is an IT issue, can be a challenge. But in the long run, if this doesn’t happen more often, there are going to be a lot of overpaid, underutilized data analysts out there and a lot of missed business opportunities. So if you are an enterprise executive wondering “do I have to worry about this Big Data business?” please take a step back and look at what you already have. And if you know any seasoned data analysts in your company, maybe even talk to them about what would make them more effective and faster at their jobs. The answer may be simpler than you think.

Big data and decision making: data vs intuition

There is certainly hype around ‘big data’, as there always has been and always will be about many important technologies or ideas – remember the hype around the Web? Just as annoying is the anti-big-data backlash hype, typically built around straw men – does anyone actually claim that big data is useful without analysis?

One unfair characterization both sides indulge in involves the role of intuition, which is viewed either as the last lifeline for data-challenged and threatened managers, or as the way real men and women make the smart difficult decisions in the face of too many conflicting statistics.

Robert Carraway, a professor who teaches Quantitative Analysis at UVA’s Darden School of Business, has good news for both sides. In a post on big data and decision making in Forbes, “Meeting the Big Data challenge: Don’t be objective” he argues “that the existence of Big Data and more rational, analytical tools and frameworks places more—not less—weight on the role of intuition.”

Carraway first mentions the Corporate Executive Board’s finding that, of over 5,000 managers surveyed, 19% were “Visceral decision makers” relying “almost exclusively on intuition.” The rest were more or less evenly split between “Unquestioning empiricists,” who rely entirely on analysis, and “Informed skeptics … who find some way to balance intuition and analysis.” The assumption of the study, and of Carraway, was that informed skeptics have the right approach.

A different study, “Frames, Biases, and Rational Decision-Making in the Human Brain,” at the Institute of Neurology at University College London tested for correlations between the influence of ‘framing bias’ (what it sounds like – making different decisions for the same problem depending on how the problem is framed) and degree of rationality. The study used fMRI to measure which areas of the brain were active and found that the most rational subjects (least influenced by framing) showed activity in the prefrontal cortex, where reasoning takes place; the least rational (most influenced by framing / intuition) showed activity in the amygdala (home of emotions); and those in between (“somewhat susceptible to framing, but at times able to overcome it”) showed activity in the cingulate cortex, where conflicts are addressed.

It is this last correlation that is suggestive to Carraway, and what he maps to being an informed skeptic. In real life, we have to make decisions without all or enough data, and a predilection for relying on either data or intuition alone can easily lead us astray. Our decision making benefits when our brain sees a conflict, calling for skeptical analysis, between what the data says and what our intuition is telling us. In other words, intuition is a partner in the dance, and the implication is that it is always in the dance; it always has a role.

Big data and all the associated analytical tools provide more ways to find bogus patterns that fit what we are looking for. This makes it easier to find false support for a preconception. So just looking at the facts – just being “objective” – just being “rational” – is less likely to be sufficient.
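A quick simulation makes the point. This sketch (illustrative, not from Carraway’s post) generates thousands of purely random “signals” and still finds one that correlates respectably with the metric we care about:

```python
import numpy as np

rng = np.random.default_rng(42)
target = rng.normal(size=100)             # the business metric we care about
signals = rng.normal(size=(10_000, 100))  # 10,000 unrelated random "signals"

# Correlate every random signal with the target and keep the best match.
corrs = np.array([np.corrcoef(target, s)[0, 1] for s in signals])
best = np.abs(corrs).argmax()
print(f"best spurious correlation: {corrs[best]:+.2f}")
# With this many candidates, |r| near 0.4 turns up by chance alone.
```

The more variables and tools you have, the more impressive your best spurious pattern will look, which is why objectivity alone is not enough.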

The way to improve the odds is to introduce conflict – call in the cingulate cortex cavalry. If you have a preconceived belief, acknowledge it, and then try to refute it, rather than support it, with the data.

“the choice of how to analyze Big Data should almost never start with “pick a tool, and use it”. It should invariably start with: pick a belief, and then challenge it. The choice of appropriate analytical tool (and data) should be driven by: what could change my mind?…”

Of course conflict isn’t only possible between intuition and data. It can also be created between different data patterns. Carraway has an earlier related post, “Big Data, Small Bets“, that looks at creating multiple small experiments for big data sets designed to minimize identifying patterns that are either random or not significant.

Thanks to Professor Carraway for elevating the discussion. Read his full post.

Customer experiences, communications, and analytics

[Venn diagram from Scott Brinker’s post: three epicenters of innovation in modern marketing]
I recently discovered Scott Brinker’s Chief Marketing Technologist blog and recommend it as a useful resource for marketers. The Venn diagram above is from a recent post, 3 epicenters of innovation in modern marketing. It was the Venn diagram that first grabbed my attention: I love Venn diagrams as a communication tool, it reminded me of another Venn diagram that was well received at the recent Gilbane Conference, and most of the conference discussions map to someplace in the illustration.

As good as the graphic is on its own, you should read Scott’s post and see what he has to say about the customer experience “revolution”.

Lest you think Scott is a little too blithe in his acceptance of the role of big data, see his The big data bubble in marketing — but a bigger future, where the first half of the (fairly long) post talks about all the hype around big data. But you should read the full post, because he is right on target in describing the role of big data in marketing innovation, and in his conclusion that organizations will need to become data-driven and data-savvy to make use of big data, though such organizations will take some time to build.

So don’t let current real or perceived hype about the role of big data in marketing lead you to discount its importance – it’s a matter of when, not if. “When” is not easy to predict, but it will certainly differ depending on an organization’s resources, its ability to deal with complexity, and the organizational and infrastructure changes required.

Enterprise Search Strategies: Cultivating High Value Domains

At the recent Gilbane Boston Conference I was happy to hear the variety of remarks positioning and defining “Big Data.” Like so much in the marketing sphere of high tech, answers begin with technology vendors but get refined and parsed by analysts and consultants, who need to set clear expectations about the actual problem domain. It’s a good thing that we have humans to do that defining, because even the most advanced semantics would be hard-pressed to give you a single useful answer.

I heard Sue Feldman of IDC give a pretty good “working definition” of big data at the Enterprise Search Summit in May 2012. To paraphrase, it was:

  • > 100 TB up to petabytes, OR
  • > 60% growth a year of unstructured and unpredictable content, OR
  • Ultra high streaming content
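Feldman’s thresholds are simple enough to state as a predicate. A minimal sketch, with the numbers taken from the paraphrased list above and everything else illustrative:

```python
def is_big_data(size_tb: float, yearly_growth: float,
                ultra_high_streaming: bool) -> bool:
    """Any single criterion suffices; the ORs mirror the list above."""
    return size_tb > 100 or yearly_growth > 0.60 or ultra_high_streaming

# 250 TB at 20% yearly growth, no streaming: qualifies on size alone.
print(is_big_data(size_tb=250, yearly_growth=0.20, ultra_high_streaming=False))
```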

But we then get into debates about differentiating data from unstructured content when a phrase like “big data” is applied to unstructured content, which knowledge strategists like me tend to put into the category of packaged information. But never mind: technology solution providers will continue to come up with catchy buzz phrases to codify the problem they are solving, whether those phrases make semantic sense or not.

What does this have to do with enterprise search? In short, “findability” is an increasingly heavy lift due to the size and number of content repositories. We want to define quality findability as optimal relevance and recall.

A search technology era ago, publishers, libraries, and content management solution providers focused on human curation of non-database content, applying controlled vocabulary categories derived from decades of human-managed terminology lists. Automated search provided highly structured access interfaces to what we now call unstructured content. Once this model was supplanted by full-text retrieval, and new content originated in electronic formats, the proportion of uncategorized to human-categorized content ballooned.

Hundreds of models for automatic categorization have been rolled out to try to stay ahead of the electronic onslaught. The ones that succeed do so mostly because of continued human intervention at some point in the process of making content available to be searched. From human-invented search algorithms, to terminology structuring and mapping (taxonomies, thesauri, ontologies, grammar rule bases, etc.), to hybrid machine-human indexing processes, institutions seek ways to find, extract, and deliver value from mountains of content.

This brings me to a pervasive theme from the conferences I have attended this year: the synergies among text mining, text analytics, extract/transform/load (ETL), and search technologies. These are being sought out and applied to specific findability issues in select content domains. It appears that the best results are delivered only when these criteria are first met:

  • The business need is well defined, refined and narrowed to a manageable scope. Narrowing scope of information initiatives is the only way to understand results, and gain real insights into what technologies work and don’t work.
  • A domain of high-value content is carefully selected. I have long maintained that a significant issue is the amount of redundant information we pile up across every repository. By demanding that our search tools crawl and index all of it, we place an unrealistic burden on search technologies to rank relevance and importance.
  • Apply pre-processing solutions such as text-mining and text analytics to ferret out primary source content and eliminate re-packaged variations that lack added value.
  • Apply pre-processing solutions such as ETL with text mining to assist with content enhancement, applying consistent metadata that does not have a high semantic threshold but will suffice to answer a large percentage of non-topical inquiries. An example would be finding the “paper” that “Jerry Howe” presented to the “AMA” last year; a sketch of that kind of metadata-filtered query follows this list.
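Here is a hedged sketch of that kind of metadata-filtered, non-topical query. The records, field names, and values are hypothetical, standing in for metadata fields a real search engine would index:

```python
# Hypothetical documents with the kind of consistent metadata ETL + text
# mining could apply; a real system would store these as engine fields.
docs = [
    {"type": "paper",  "author": "Jerry Howe", "venue": "AMA", "year": 2012},
    {"type": "slides", "author": "A. Smith",   "venue": "AMA", "year": 2012},
]

def find(docs, **criteria):
    """Return documents whose metadata matches every criterion exactly."""
    return [d for d in docs if all(d.get(k) == v for k, v in criteria.items())]

# "The paper Jerry Howe presented to the AMA last year":
print(find(docs, type="paper", author="Jerry Howe", venue="AMA", year=2012))
```

No deep semantics are needed for a query like this; consistent metadata alone answers it.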

Business managers together with IT need to focus on eliminating redundancy by utilizing automation tools to enhance unique and high-value content with consistent metadata, thus creating solutions for special audiences needing information to solve specific business problems. By doing this we save the searcher the most time, while delivering the best answers to make the right business decisions and innovative advances. We need to stop thinking of enterprise search as a “big data,” single engine effort and instead parse it into “right data” solutions for each need.
