Curated for content, computing, and digital experience professionals

Category: Computing & data

Computing and data is a broad category. Our coverage of computing is largely limited to software, and we are mostly focused on unstructured data, semi-structured data, or mixed data that includes structured data.

Topics include computing platforms, analytics, data science, data modeling, database technologies, machine learning / AI, Internet of Things (IoT), blockchain, augmented reality, bots, programming languages, natural language processing applications such as machine translation, and knowledge graphs.

Related categories: Semantic technologies, Web technologies & information standards, and Internet and platforms.

Customer experiences, communications, and analytics

three epicenters of innovation in modern marketing
I recently discovered Scott Brinker’s Chief Marketing Technologist blog and recommend it as a useful resource for marketers. The Venn diagram above is from a recent post, 3 epicenters of innovation in modern marketing. It was the Venn diagram that first grabbed my attention: I love Venn diagrams as a communication tool, it reminded me of another Venn diagram that was well received at the recent Gilbane Conference, and most of the conference discussions map to someplace in the illustration.

As good as the graphic is on its own, you should read Scott’s post and see what he has to say about the customer experience “revolution”.

Lest you think Scott is a little too blithe in his acceptance of the role of big data, see his The big data bubble in marketing — but a bigger future, where the first half of the (fairly long) post recounts all the hype around big data. But you should read the full post, because he is right on target in describing the role of big data in marketing innovation, and in his conclusion that organizations will need to make use of big data, though these data-driven and data-savvy organizations will take some time to build.

So don’t let current real or perceived hype about the role of big data in marketing lead you to discount its importance – it’s a matter of when, not if. “When” is not easy to predict, but it will certainly differ depending on an organization’s resources, its ability to deal with complexity, and the organizational and infrastructure changes required.

Enterprise Search Strategies: Cultivating High Value Domains

At the recent Gilbane Boston Conference I was happy to hear the variety of remarks positioning and defining “Big Data.” Like so much in the marketing sphere of high tech, answers begin with technology vendors but get refined and parsed by analysts and consultants, who need to set clear expectations about the actual problem domain. It’s a good thing that we have humans to do that defining, because even the most advanced semantics would be hard pressed to give you a single useful answer.

I heard Sue Feldman of IDC give a pretty good “working definition” of big data at the Enterprise Search Summit in May, 2012. To paraphrase, it was:

  • > 100 TB up to petabytes, OR
  • > 60% growth a year of unstructured and unpredictable content, OR
  • Ultra high streaming content

But when we use a phrase like “big data” and apply it to unstructured content, we then get into debates about differentiating data from unstructured content, which knowledge strategists like me tend to put into a category of packaged information. But never mind; technology solution providers will continue to come up with catchy buzz phrases to codify the problem they are solving, whether they make semantic sense or not.

What does this have to do with enterprise search? In short, “findability” is an increasingly heavy lift due to the size and number of content repositories. We want to define quality findability as optimal relevance and recall.
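Those two measures have standard definitions: the relevance of a result list is conventionally approximated by precision, and recall measures coverage of what should have been found. Here is a minimal sketch, with invented document IDs, for a single query:

```python
# Precision and recall for one query, given the documents a search
# engine returned and the set a human judged relevant.

def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    hits = retrieved & relevant             # relevant documents actually returned
    precision = len(hits) / len(retrieved)  # how much of the result list is useful
    recall = len(hits) / len(relevant)      # how much of the useful content was found
    return precision, recall

# Hypothetical document IDs for illustration.
retrieved = {"doc1", "doc2", "doc3", "doc4"}
relevant = {"doc2", "doc4", "doc7"}

p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```

The tension between the two is exactly the “heavy lift”: indexing everything pushes recall up and precision down, which is the redundancy problem discussed below.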

A search technology era ago, publishers, libraries, and content management solution providers were focused on human curation of non-database content, applying controlled vocabulary categories derived from decades of human-managed terminology lists. Automated search provided highly structured access interfaces to what we now call unstructured content. Once this model was supplanted by full-text retrieval, and new content originated in electronic formats, the proportion of uncategorized to human-categorized content ballooned.

Hundreds of models for automatic categorization have been rolled out to try to stay ahead of the electronic onslaught. The ones that succeed do so mostly because of continued human intervention at some point in the process of making content available to be searched. From human invented search algorithms, to terminology structuring and mapping (taxonomies, thesauri, ontologies, grammar rule bases, etc.), to hybrid machine-human indexing processes, institutions seek ways to find, extract, and deliver value from mountains of content.

This brings me to a pervasive theme from the conferences I have attended this year: the synergies among text mining, text analytics, extract/transform/load (ETL), and search technologies. These are being sought, employed, and applied to specific findability issues in select content domains. It appears that the best results are delivered only when these criteria are first met:

  • The business need is well defined, refined, and narrowed to a manageable scope. Narrowing the scope of information initiatives is the only way to understand results and gain real insight into which technologies work and which don’t.
  • The domain of high-value content is carefully selected. I have long maintained that a significant issue is the amount of redundant information we pile up across every repository. By demanding that our search tools crawl and index all of it, we place an unrealistic burden on search technologies to rank relevance and importance.
  • Pre-processing solutions such as text mining and text analytics are applied to ferret out primary source content and eliminate re-packaged variations that lack added value.
  • Pre-processing solutions such as ETL with text mining are applied to assist with content enhancement, adding consistent metadata that does not have a high semantic threshold but will suffice to answer a large percentage of non-topical inquiries. An example would be to find the “paper” that “Jerry Howe” presented to the “AMA” last year (see the sketch after this list).
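To make the last point concrete, here is a minimal sketch of that kind of low-semantic-threshold enrichment. It is not any particular vendor’s pipeline; the regex patterns and field names are my own illustrative assumptions, tuned to answer non-topical queries like the one above:

```python
import re

# Rule-based metadata extraction: pull author, venue, and year out of
# raw document text so a search index can answer non-topical queries
# without deep semantic analysis. Patterns and fields are illustrative.
PATTERNS = {
    "author": re.compile(r"presented by ([A-Z][a-z]+ [A-Z][a-z]+)"),
    "venue":  re.compile(r"presented (?:by .+? )?to the ([A-Z]{2,})"),
    "year":   re.compile(r"\b(?:19|20)\d{2}\b"),
}

def extract_metadata(text: str) -> dict:
    metadata = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            metadata[field] = match.group(1) if pattern.groups else match.group(0)
    return metadata

doc = "Paper presented by Jerry Howe to the AMA, October 2012."
print(extract_metadata(doc))
# {'author': 'Jerry Howe', 'venue': 'AMA', 'year': '2012'}
```

Rule bases like this are brittle on their own, which is why the hybrid machine-human indexing processes mentioned earlier keep people in the loop.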

Business managers together with IT need to focus on eliminating redundancy by using automation tools to enhance unique, high-value content with consistent metadata, thus creating solutions for special audiences needing information to solve specific business problems. By doing this we save the searcher the most time, while delivering the best answers to support the right business decisions and innovative advances. We need to stop thinking of enterprise search as a “big data,” single-engine effort and instead parse it into “right data” solutions for each need.

Integrating External Data & Enhancing Your Prospects

Most companies with IT account teams and account selling strategies have a database in a CRM system, and the company records in that database generally have a wide range of data elements and varying degrees of completeness. Beyond the basic demographic information, some records are more complete than others with regard to providing information that can tell the account team more about the drivers of sales potential. In some cases, this additional data may have been collected by internal staff; in other cases, it may be purchased from organizations like Harte-Hanks, RainKing, HG Data, or any number of custom resources/projects.

There are some other data elements that can be added to your database from freely available resources. These data elements can enhance the company records by showing which companies will provide better opportunities. One simple example we use in The Global 5000 database is the number of employees that have a LinkedIn profile. This may be an indicator that companies with a high percentage of social media users are more likely to purchase or use certain online services. That data is free to use. Obviously, that indicator does not work for every organization, and each company needs to test the correlation between its customers and attributes such as environment or product usage.
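A minimal sketch of that kind of test, assuming pandas, a hypothetical is_customer flag, and an invented linkedin_ratio column (share of employees with LinkedIn profiles):

```python
import pandas as pd

# Test whether an appended attribute actually correlates with being a
# customer. Column names and values are illustrative; correlating a 0/1
# flag with a numeric attribute via Pearson is the point-biserial measure.
accounts = pd.DataFrame({
    "is_customer":    [1, 1, 0, 0, 1, 0, 0, 1],
    "linkedin_ratio": [0.62, 0.55, 0.18, 0.33, 0.71, 0.25, 0.40, 0.58],
})

corr = accounts["is_customer"].corr(accounts["linkedin_ratio"])
print(f"customer vs. linkedin_ratio correlation: {corr:.2f}")
```

A strong positive number suggests the attribute is worth appending; a value near zero says the free data is not predictive for your business.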

Other free and interesting data can be found in government filings. For example, any firm with benefit and 401(k) plans must make federal filings, and that filing data is available from the US government. A quick scan of the web site data.gov shows a number of options and data sets available for download and integration into your prospect database. The National Weather Center, for example, provides a number of specific long-term forecasts which can be helpful for anyone selling to the agriculture market.

There are a number of things that need to be considered when importing and appending or modeling external data. Some of the key aspects include:

  • A match code or record identifier whereby external records can be matched to your internal company records. Many systems use the DUNS number from D&B rather than trying to match on company names, which can have too many variations to be useful (see the sketch after this list).
  • The CRM record level needs to be established so that the organization is focused on companies at a local entity level or at the corporate HQ level. For example, if you are selling multi-national network services, having lots of site records is probably not helpful when you most likely have to sell at the corporate level.
  • De-dupe your existing customers. When acquiring and integrating an external file, those external sources won’t know your customer set, and you will likely be importing data about your existing customers. If you are going to turn around and send this new, enhanced data to your team, it makes sense to identify or remove existing clients from that effort so that your organization is not marketing to them all over again.
  • Identifying the key drivers that turn the vast sea of companies into prospects, and then into clients, will provide a solid list of key data attributes to append to existing records. For example, these drivers may include elements such as revenue growth, productivity measures such as revenue per employee, credit ratings, multiple locations, or selected industries.
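As a concrete illustration of the match and de-dupe points above, here is a minimal sketch assuming pandas and invented DUNS numbers and column names:

```python
import pandas as pd

# Match an external file to internal CRM records on DUNS number rather
# than company name, then drop rows that are already customers so the
# enhanced list contains prospects only. All values are illustrative.
crm = pd.DataFrame({
    "duns":        ["001368083", "004321519", "069032677"],
    "company":     ["Acme Corp", "Globex", "Initech"],
    "is_customer": [True, False, True],
})
external = pd.DataFrame({
    "duns":           ["001368083", "069032677", "080139500"],
    "revenue_growth": [0.12, -0.03, 0.21],
})

merged = external.merge(crm, on="duns", how="left")   # DUNS match, not name match
prospects = merged[merged["is_customer"] != True]     # unmatched rows are kept as new prospects
print(prospects[["duns", "revenue_growth"]])
```

Real matching is messier than an exact key join (stale DUNS numbers, subsidiaries rolling up to a parent), which is why the record-level question in the second point matters before any merge.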

In this era of marketing sophistication, with increasing ‘tons’ of big data available and sophisticated analytical tools coming to market, every company has the opportunity to enhance its internal data by integrating external data, and to go to market armed with more insight than ever before.

Learn more about the Global 5000 database


Frank Gilbane interview on Big Data

Big data is something we cover at our conference and this puzzles some given our audience of content managers, digital marketers, and IT, so I posted Why Big Data is important to Gilbane Conference attendees on gilbane.com to explain why. In the post I also included a list of the presentations at Gilbane Boston that address big data. We don’t have a dedicated track for big data at the conference but there are six presentations including a keynote.

I was also interviewed on the CMS-Connected internet news program about big data the same week, which gave me an opportunity to answer some additional questions about big data and its relevance to the same kind of audience. There is still a lot more to say about this, but the post and the interview combined cover the basics.

The CMS-Connected show was an hour long and also included Scott and Tyler interviewing Rob Rose on big data and other topics. You can see the entire show here, or just the twelve-minute interview with me below.

Private Companies and Public Companies – Sizing up IT Spending

One aspect of the Global 5000 company database is that we include all types, shapes and locations of companies including those that are publicly listed as well as private firms. For those who sell to corporations (as opposed to consumers) there is a great deal of interest in private companies. A lot of this can be attributed to the fact that public companies have to disclose so much about their size, shape and all aspects of their organizations – most everyone knows or can find out what they need to. Privates, on the other hand, are less well known and hold the allure that there is great, undiscovered opportunity in there.

To get a sense of the dynamics of the public/private mix, we examined a number of metrics for companies in the Global 5000 database. It is true that more large companies are publicly traded. Of the 5000 companies, nearly 4,000 are public and just over 1,000 are private. That is the inverse of the market as a whole, where most companies in any country or industry are private. Here are a few facts about each group.

  • The average revenue for a public company in the Global 5000 is $10.3 billion, while the private companies averaged $10.6 billion.
  • Public companies reported an average revenue per employee of $214,000, while private companies were just over $282,000.
  • For both 2010 and 2011, revenue for both public and private companies grew by slightly more than 11.5%. Virtually no difference.
  • In both cases, IT spending per company is over $290 million and approximately 2.7% of revenue.
  • Total IT spending for Global 5000 public companies is approximately $1.1 trillion while private Global 5000 companies will spend about $300 billion.

The bottom line here is that big is big. It does not make much difference whether the company is public or private; the big guys will spend a lot on a wide variety of products and services, including IT products and services. The real difference is in how many of these large opportunities there are. Just because we find a few of these nuggets among the privates does not mean all privates look alike. Most are quite a bit smaller.

Learn more about the Global 5000 database

The Flip Side of IT Spending and Productivity

In our last post we explored the companies in The Global 5000 that showed the biggest gains in revenue per employee AND spent the most on IT. The idea is that this group will continue to spend and strive for continuous improvements — making some great potential targets for those IT suppliers that can show their offerings help save money.

Now, we turn the page and explore the other end of the spectrum. Again, taking companies in the Global 5000 database, we now look at the bottom 2,000 companies in terms of change in revenue per employee. That is — they are not on a positive track. From this group we then took the lowest 1,000 firms in terms of IT spending.

We can look at this set of companies in one of two ways – either:

  • they are ripe opportunities that will need to invest in order to grow their revenue faster or get more productivity out of the existing workforce, or
  • they are not going any further with technology spending, and their growth will not come from increasing IT spend per employee.

We should run to the first group and run away from the second. Here is the profile of these 1,000 companies; their industries have traditionally been a challenge for IT suppliers.

The top countries are:

  • USA
  • UK
  • Japan
  • Canada
  • France
  • Spain

And the top industries:

  • Industrial Manufacturers
  • Retailers
  • Consumer Goods Manufacturers
  • Business Services
  • Construction

For more information about The Global 5000 database click here


Why Big Data is important to Gilbane Conference attendees

If you think there is too much hype around, and gratuitous use of, the term big data, you haven’t seen anything yet. But don’t make the mistake of confusing the hype with how fundamental and how transformational big data is and will certainly be. Just turn your hype filter to high and learn enough about it to make your own judgements about how it will affect your business, and whether you need to do something about it now or monitor it for future planning.

As I said yesterday in a comment on a post by Sybase CTO Irfan Khan, Gartner dead wrong about big data hype cycle (with a response from Gartner):

However Gartner’s Hype Cycle is interpreted I think it is safe to say that most, including many analysts, underestimate how fundamental and how far-reaching big data will be. How rapidly its use will evolve, and in which applications and industries first, is a more difficult and interesting discussion. The twin brakes of a shortage of qualified data scientist skills and the costs and complexities of IT infrastructure changes will surely slow things down and cause disillusionment. On the other hand we have all been surprised by how fast some other fundamental changes have ramped up, and BDaaS (Big Data as a Service) will certainly help accelerate things. There is also a lot more big data development and deployment activity going on than many realize – it is a competitive advantage after all.

There is also a third “brake”, which is all the uncertainty around privacy issues. There is already a lot of consumer data that is not being fully used because of fear of customer backlash or new regulation and, one hopes, because of a degree of respect for consumers’ privacy.

Rob Rose expanded on some specific concerns of marketers in a recent post, Big Data & Marketing – It’s A Trap!, including the lack of resources for interpreting even the mostly website analytics data marketers already have. It’s true, and not just for smaller companies. In addition, there are at least four requirements for making big data analytics accessible to marketers that are largely beyond the reach of most current organizations.

Partly to the rescue is Big Data as a Service, or BDaaS (one of the more fun-sounding acronyms). BDaaS is going to be a huge business. All the big technology infrastructure firms are getting involved, and the analytics vendors will all have cloud and big data services. There are also many new companies, including some surprises. For example, after developing its own Hadoop-based big data analytics expertise, Sears created subsidiary MetaScale to provide BDaaS to other enterprises. Ajay Agarwal from Bain Capital Ventures predicts that the confluence of big data and marketing will lead to several new multi-billion dollar companies, and I think he is right.

While big data is important to the marketers, content managers, and IT staff who attend our conference because of the potential for enhanced predictive analytics and content marketing, the reach and value of big data applications is far broader than marketing: executives need to understand the potential for new efficiencies, products, and businesses. The well-known McKinsey report “Big Data: The Next Frontier for Innovation, Competition, and Productivity” (free) is a good place to start. If you are in the information business, I focus on that in my report Big-Data: Big Deal or Just Big Buzz? (not free).

Big data presentations at Gilbane Boston

This year we have six presentations on big data, two devoted to big data and marketing and all chosen with an eye towards the needs of our audience of marketers, content strategists, and IT. You can find out more about these presentations, including their date and time on the conference program.

Keynote

Bill Simmons, CTO, DataXu
Why Marketing Needs Big Data

Main Conference Presentations

Tony Jewitt, VP Big Data Solutions at Avalon Consulting, LLC
“Big Data” 101 for Business

Bryan Bell, Vice President, Enterprise Solutions, Expert System
Semantics and the Big Data Opportunity

Brian Courtney, General Manager of Operations Data Management, GE Intelligent Platforms
Leveraging Big Data Analytics

Darren Guarnaccia, Senior VP, Product Marketing, Sitecore
Big Data: What’s the Promise and Reality for Marketers?

Stefan Andreasen, Founder and Chief Technology Officer, Kapow Software
Big Data: Black Hole or Strategic Value?

Update: There is now a video of me being interviewed on big data by CMS-Connected.

IT Spending and Productivity Improvements in Global 5000 Companies

One of the simple questions that business management has to ask when considering new spending is “will this help me make money or save money?” If the answer to either of those is not clear, it is hard to see that investment happening. It does not matter if this pertains to IT spending, a new facility, or any other kind of major outlay. There has been a great deal of research conducted in past years showing that investment in technology does, in fact, lead to an increase in productivity in many cases.

A convenient way to look at this is to simply calculate a company’s revenue per employee and compare it to that of its peers. We did this recently with The Global 5000 companies and took it a step further.

First we looked at companies in the Global 5000 list that have shown an increase in their revenue per employee ratios over the past two years. We selected the top 2,000 based on the largest percent increases in revenue per employee. Next, from this top 2,000, we looked at their corporate IT spending, ranked them from largest to smallest, and selected the top 1,000 IT spenders.

Therefore, the group of 1,000 we have examined are those growing revenue per employee the fastest and spending the most on IT. A reasonable assumption would be that they will continue to spend and strive for continuous improvements — making some great potential targets for those that can show their offerings help save money.
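That two-stage screen is straightforward to express in code. Here is a minimal sketch, assuming pandas and a DataFrame with hypothetical rev_per_emp_change and it_spend columns (the names are mine, not from the database itself):

```python
import pandas as pd

# Two-stage selection described above: take the 2,000 companies with the
# largest percent increase in revenue per employee, then keep the 1,000
# biggest IT spenders among them. Column names are illustrative.
def top_targets(global5000: pd.DataFrame) -> pd.DataFrame:
    fast_improvers = global5000.nlargest(2000, "rev_per_emp_change")
    return fast_improvers.nlargest(1000, "it_spend")
```

Reversing both selections (nsmallest instead of nlargest) yields the flip-side group discussed in the follow-up post.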

Our list of the best 1,000 for this selection falls into these industry groups:

  • Financial Services
  • Large Industrials
  • Oil & Gas
  • Technology companies
  • Basic Materials
  • Business Services

And the leading countries for these key targets are:

  • USA
  • Japan
  • China
  • UK
  • Germany
  • France
  • Canada
  • Switzerland
  • Australia
  • Brazil

In our next post, we will flip this analysis and look at those that are not growing revenue per employee and do not spend a lot on IT — those may be opportunities in waiting, or places to avoid spending a lot of time on.

You can find more information about The Global 5000 database by clicking here



© 2025 The Gilbane Advisor
