Archive for Analytics

Building the analytics you need to monetize your innovation

Join us at the Gilbane Conference in Boston December 1-3 and learn how your peers are building superior digital experiences for customers and employees. If you haven’t reviewed your analytics for effectiveness in a while, or are wondering if you are collecting the right metrics to support your business objectives, this in-depth workshop is for you.

Great Ideas Need the Right Metrics to Flourish: Building the Analytics You Need to Monetize your Innovation

For digital innovators, analytics and data-driven decision-making have become key determinants of success. “If you can measure it, you can manage it.” The right metrics often make the difference between monetizing innovation and under-performance.

Yet identifying these “metrics that matter” isn’t easy—the right metrics vary widely based on your business model—nor is it easy to build the required capabilities and collect the necessary data. Fortunately there is a way to make it easier, and this presentation will share it.

In this workshop, author and analytics veteran Jaime Fitzgerald will share his battle-tested method that addresses this challenge. During two decades working with data, Mr. Fitzgerald created a new method that makes it easier to define the metrics you really need to monetize your innovative ideas, business models, and initiatives. In addition to defining the “metrics that matter,” Mr. Fitzgerald’s methodology defines the analytic methods and data sources you need to generate these key performance indicators, and how they will be used to enhance key business decisions, essential processes, and business model evolution.

Instructor: Jaime Fitzgerald, Founder & Managing Partner, Fitzgerald Analytics
Tuesday, December 1: 1:00 p.m. – 4:00 p.m. • Fairmont Copley Plaza Hotel, Boston

This workshop is included in the ConferencePlus package. Save $200 on the ConferencePlus and Conference Only options. To get your Bluebill discount use priority code 200BB when registering online.

I would like my $200 registration discount – code 200BB


What big companies are doing with big data today

The Economist has been running a conference largely focused on Big Data for three years. I wasn’t able to make it this year, but the program looks like it is still an excellent event for executives to get their heads around the strategic value, and the reality, of existing big data initiatives from a trusted source. Last month’s conference, The Economist’s Ideas Economy: Information Forum 2013, included an 11-minute introduction to a panel on what large companies are currently doing with big data and on how boardrooms are looking at it today that is almost perfect for circulating to c-suites. The presenter is Paul Barth, managing partner at NewVantage Partners.

Thanks to Gil Press for pointing to the video on his What’s The Big Data? blog.

The Analyst’s Lament: Big Data Hype Obscures Data Management Problems in the Enterprise

I’ve been a market and product analyst for large companies. I realize that my experiences are a sample of one, and that I can’t speak for my analyst peers. But I suspect some of them would nod in recognition when I say that in those roles, I spent only a fraction of my time actually conducting data analysis. With the increase in press that Big Data has received, I started seeing a major gap between what I was reading about enterprise data trends and my actual experiences working with enterprise data.

A more accurate description of what I spent large amounts of time doing was data hunting. And data gathering, and data cleaning, and data organizing, and data checking. I spent many hours trying to find the right people in various departments who “owned” different data sources. I then had to locate definitions (if they existed – this was hit or miss) and find out what quirks the data had so I could clean it without losing records (for example, which of the many data fields with the word “revenue” in their names would actually give me revenue). In several cases I found myself begging fellow overworked colleagues to please, please, pull the data I needed from that database which I in theory should have had access to but was shut out of due to multiple layers of bureaucracy and overall cluelessness as to what data lived where within the organization.
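The kind of cleaning pass described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual pipeline; the field names (`net_revenue_usd`, `company`) are hypothetical stand-ins for the look-alike columns an analyst might encounter:

```python
def clean_records(records, revenue_field="net_revenue_usd"):
    """Keep records with a parseable value in the agreed-on revenue field,
    normalizing it to a float instead of silently dropping odd formats."""
    cleaned = []
    for rec in records:
        raw = rec.get(revenue_field, "")
        # Strip common formatting quirks ("$1,200.50") before parsing.
        value = str(raw).replace("$", "").replace(",", "").strip()
        try:
            rec = dict(rec, revenue=float(value))
        except ValueError:
            continue  # unparseable -> excluded; in practice, count and review these
        cleaned.append(rec)
    return cleaned

rows = [
    {"company": "Acme", "net_revenue_usd": "$1,200.50"},
    {"company": "Beta", "net_revenue_usd": "n/a"},
]
print(clean_records(rows))  # only Acme survives, with revenue=1200.5
```

The real time sink, of course, is everything before this step: knowing which field to pass as `revenue_field` in the first place.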

Part of me thought, “Well, this is the lot of an analyst in a large company. It is the job.” And this was confirmed by other more senior managers – all on the business side, not the IT side – who asserted that, yes, being a data hunter/gatherer/cleaner/organizer/checker was indeed my job. But another part of me was thinking, “These are all necessary tasks in dealing with data. I will always need to clean data no matter what. I will need to do some formatting and re-checking to make sure what I have is correct. But should this be taking up such a large chunk of my time? This is not the best way I can add value here. There are too many business questions I could potentially be helping to solve; there has got to be a better way.”

So initially I thought, not being an IT professional, that this was an issue of not having the right IT tools. But gradually I came to understand that technology was not the problem. More often than not, I had access to best-in-class CRM systems, database and analytics software, and collaboration tools at my disposal. I had the latest versions of Microsoft Office and a laptop or desktop with decent processing power. I had reliable VPN connectivity when I was working remotely, and often a company-supplied smartphone. It was the processes and people that were the biggest barriers to getting the information I needed in order to provide fact-based research that could inform business-critical decisions.

Out of sheer frustration, I started doing some research to see if there was indeed a better way for enterprises to manage their data. Master Data Management (MDM), you’ve been around for over a decade, why haven’t I ever encountered you? The Information Difference, a UK-based consultancy which specializes in MDM, argues that too often, decisions about data management and data governance are left solely to the IT department. The business should also be part of any MDM project, and the governance process should be sponsored and led by C-level business management. Talk about “aha” moments. When I read this, I actually breathed a sigh of relief. It isn’t just me who thinks there has to be a better way, so that the not-cheap business and market analysts that enterprises the world over employ can actually spend more of their time solving problems and less time wrangling data!

That’s why when I read the umpteenth article/blog post/tweet about how transformative Big Data is and will be, I cannot help but groan. Before enterprises begin to think about new ways of structuring and distributing data, they need to audit how existing data is already used within and between different businesses. In particular, they should consider MDM if it has not already been implemented. There is so much valuable data that already exists in the enterprise, but the business and IT have to actually work together to deploy and communicate about data initiatives. They also need to evaluate whether and how enterprise data is being used effectively for business decisions, and whether that usage meets compliance and security rules.

I suspect that many senior IT managers know this and agree. I also suspect that getting counterparts in the business to be active and own decisions about enterprise data, and not just treat data as an IT issue, can be a challenge. But in the long run, if this doesn’t happen more often, there are going to be a lot of overpaid, underutilized data analysts out there, and missed business opportunities. So if you are an enterprise executive wondering “do I have to worry about this Big Data business?” please take a step back and look at what you already have. And if you know any seasoned data analysts in your company, maybe even talk to them about what would make them more effective and faster at their jobs. The answer may be simpler than you think.

Big data and decision making: data vs intuition

There is certainly hype around ‘big data’, as there always has been and always will be about many important technologies or ideas – remember the hype around the Web? Just as annoying is the hype of the anti-big-data backlash, typically built around straw men – does anyone actually claim that big data is useful without analysis?

One unfair characterization both sides indulge in involves the role of intuition, which is viewed either as the last lifeline for data-challenged and threatened managers, or as the way real men and women make the smart difficult decisions in the face of too many conflicting statistics.

Robert Carraway, a professor who teaches Quantitative Analysis at UVA’s Darden School of Business, has good news for both sides. In a post on big data and decision making in Forbes, “Meeting the Big Data challenge: Don’t be objective” he argues “that the existence of Big Data and more rational, analytical tools and frameworks places more—not less—weight on the role of intuition.”

Carraway first mentions the Corporate Executive Board’s finding that, of over 5,000 managers surveyed, 19% were “Visceral decision makers” relying “almost exclusively on intuition.” The rest were more or less evenly split between “Unquestioning empiricists,” who rely entirely on analysis, and “Informed skeptics … who find some way to balance intuition and analysis.” The assumption of the study, and of Carraway, was that the informed skeptics had the right approach.

A different study, “Frames, Biases, and Rational Decision-Making in the Human Brain“, at the Institute of Neurology at University College London tested for correlations between the influence of ‘framing bias’ (what it sounds like – making different decisions for the same problem depending on how the problem was framed) and degree of rationality. Measuring brain activity with fMRI, the study found that in the most rational subjects (least influenced by framing) the activity took place in the prefrontal cortex, where reasoning takes place; in the least rational (most influenced by framing / intuition), in the amygdala (home of emotions); and in those in between (“somewhat susceptible to framing, but at times able to overcome it”), in the cingulate cortex, where conflicts are addressed.

It is this last correlation that is suggestive to Carraway, and what he maps to being an informed skeptic. In real life, we have to make decisions without all or enough data, and a predilection for relying on either data or intuition alone can easily lead us astray. Our decision making benefits when our brain sees a conflict, calling for skeptical analysis, between what the data says and what our intuition is telling us. In other words, intuition is a partner in the dance, and the implication is that it is always in the dance — it always has a role.

Big data and all the associated analytical tools provide more ways to find bogus patterns that fit what we are looking for. This makes it easier to find false support for a preconception. So just looking at the facts – just being “objective” – just being “rational” – is less likely to be sufficient.

The way to improve the odds is to introduce conflict – call in the cingulate cortex cavalry. If you have a preconceived belief, acknowledge it and then try to refute it, rather than support it, with the data.

“the choice of how to analyze Big Data should almost never start with “pick a tool, and use it”. It should invariably start with: pick a belief, and then challenge it. The choice of appropriate analytical tool (and data) should be driven by: what could change my mind?…”
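One concrete way to practice “pick a belief, and then challenge it” is a refutation-style significance check: instead of hunting for patterns that support the belief, ask how easily chance alone could have produced the evidence for it. A minimal sketch in Python, with invented conversion numbers (this is an illustration of the general idea, not Carraway’s own method):

```python
import random

def permutation_p_value(treated, control, n_iter=10_000, seed=42):
    """Estimate how often random relabeling of the pooled data produces a
    mean difference at least as large as the observed one. A large value
    refutes the belief that the groups genuinely differ."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a, b = pooled[:len(treated)], pooled[len(treated):]
        if sum(a) / len(a) - sum(b) / len(b) >= observed:
            extreme += 1
    return extreme / n_iter

# Belief: the new campaign lifts daily conversions. Try to refute it.
new_campaign = [52, 58, 61, 55, 60]
old_campaign = [41, 44, 39, 46, 42]
p = permutation_p_value(new_campaign, old_campaign)
print(f"p = {p:.4f}")  # a small p means the belief survived the refutation attempt
```

The design choice matters more than the particular test: the analysis starts from a stated belief and asks what could overturn it, rather than from a tool looking for patterns.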

Of course conflict isn’t only possible between intuition and data. It can also be created between different data patterns. Carraway has an earlier related post, “Big Data, Small Bets“, that looks at running multiple small experiments on big data sets, designed to minimize the risk of identifying patterns that are random or insignificant.

Thanks to Professor Carraway for elevating the discussion. Read his full post.

Integrating External Data & Enhancing Your Prospects

Most companies with IT account teams and account selling strategies have a database in a CRM system, and the company records in that database generally have a wide range of data elements and varying degrees of completeness. Beyond the basic demographic information, some records are more complete than others with regard to providing information that can tell the account team more about the drivers of sales potential. In some cases, this additional data may have been collected by internal staff; in other cases, it may be data purchased from organizations like Harte-Hanks, RainKing, HG Data or any number of custom resources/projects.

There are some other data elements that can be added to your database from freely available resources. These data elements can enhance the company records by showing which companies will provide better opportunities. One simple example we use in The Global 5000 database is the number of employees that have a LinkedIn profile. Companies with a high percentage of social media users may be more likely to purchase or use certain online services. That data is free to use. Obviously, that indicator does not work for every organization, and each company needs to test the correlation between its customers and the attributes, environment or product usage.
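That correlation test need not be elaborate; a Pearson correlation between a candidate attribute and a customer flag is a reasonable first pass. A rough Python sketch, in which the LinkedIn-share values and customer flags are entirely made up for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Share of each company's employees with a LinkedIn profile (hypothetical),
# paired with whether that company became a customer (1) or not (0).
linkedin_share = [0.62, 0.15, 0.48, 0.09, 0.55, 0.21]
is_customer    = [1,    0,    1,    0,    1,    0]
print(round(pearson(linkedin_share, is_customer), 2))  # prints 0.97
```

A strong correlation on your own customer base is the signal that an attribute is worth appending to prospect records; a weak one says to skip it, free or not.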

Other free and interesting data can be found in government filings. For example, any firm with benefit and 401k plans must make federal filings, and that filing data is available from the US government. A quick scan of the web site data.gov shows a number of options and data sets available for download and integration into your prospect database. The National Weather Center, for example, provides a number of specific long-term forecasts which can be helpful for anyone selling to the agriculture market.

There are a number of things that need to be considered when importing and appending or modeling external data. Some of the key aspects include:

  • A match code or record identifier whereby external records can be matched to your internal company records. Many systems use the DUNS number from D&B rather than trying to match on company names which can have too many variations to be useful.
  • The CRM record level needs to be established so that the organization is focused on companies at a local entity level or at the corporate HQ level. For example, if you are selling multi-national network services, having lots of site records is probably not helpful when you most likely have to sell at the corporate level.
  • De-dupe your existing customers. When acquiring and integrating an external file, remember that those external sources won’t know your customer set, and you will likely be importing data about your existing customers. If you are going to turn around and send this new, enhanced data to your team, it makes sense to identify or remove existing clients from that effort so that your organization is not marketing to them all over again.
  • Identifying the key drivers that turn the vast sea of companies into prospects and then into clients will provide a solid list of key data attributes that can be used to append to existing records.  For example, these drivers may include elements such as revenue growth, productivity measures such as revenue per employee, credit ratings, multiple locations or selected industries.
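The matching and de-duping steps above can be sketched in a few lines of Python. This is a minimal illustration rather than a production matching pipeline; the field names (`duns`, `is_customer`, `revenue_growth`) and sample records are invented:

```python
def append_external(internal, external, key="duns"):
    """Match external records to internal ones on a stable identifier
    (e.g., a DUNS number) instead of fuzzy company-name matching, and
    split out rows that describe existing customers so they are not
    marketed to all over again."""
    by_key = {rec[key]: rec for rec in internal}
    prospects, existing_customers = [], []
    for ext in external:
        match = by_key.get(ext[key])
        if match and match.get("is_customer"):
            existing_customers.append(ext)
        else:
            # Append the external attributes to whatever we already hold.
            prospects.append({**(match or {}), **ext})
    return prospects, existing_customers

internal = [
    {"duns": "100", "name": "Acme Corp", "is_customer": True},
    {"duns": "200", "name": "Beta Ltd", "is_customer": False},
]
external = [
    {"duns": "100", "revenue_growth": 0.12},
    {"duns": "200", "revenue_growth": 0.31},
    {"duns": "300", "revenue_growth": 0.05},
]
prospects, existing = append_external(internal, external)
print(len(prospects), len(existing))  # prints: 2 1
```

Matching on a stable identifier is the design choice that makes the rest easy: once records share a key, both the append and the customer suppression are dictionary lookups.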

In this era of marketing sophistication, with ‘tons’ of Big Data available and sophisticated analytical tools coming to market, every company has the opportunity to enhance its internal data by integrating external data, and to go to market armed with more insight than ever before.

Learn more about the Global 5000 database


Harry Henry’s Global 5000 Insights

Colleague and market research expert Harry Henry is filling a hole in the company research market with his Global 5000 database of the 5000 largest global companies, including both public and private businesses. This is already an important resource for marketers who need to understand global market opportunities more than they ever have before – and that most likely means you, since most of our readers are from mid-to-large size companies who either are or should be growing their international business.

While we focus on the information technology strategies for reaching and engaging with customers and colleagues everywhere, you still need to decide which markets and regions, which industries, and which leading companies to target for growth. Harry has generously agreed to provide regular posts providing insights from his database to help inform those decisions.

Read Harry’s first post China Eyes Canadian Energy Resources. You can follow Harry’s posts on this blog at http://bluebillinc.com/author/hhenry/. Or you can reach him directly.

China Eyes Canadian Energy Resources

One of the interesting news announcements this week was about CNOOC of China buying Nexen of Canada, an energy exploration company. CNOOC is a $38 billion company and Nexen reported revenues of $6.3 billion in 2011.

For any company looking at global markets, there are some interesting developments wrapped up in this announcement.

This is a major step for a Chinese company, a major step in the energy industry and a major step for Canada.  Consider a few facts compiled from our Global 5000 database.

  • Canada’s energy assets are substantial. Looking at Global 5000 companies in Canada, Oil & Gas and Mining are the two largest industries after Financial Services, together representing 27% of the largest companies in Canada. So, as the world’s thirst for natural resources and energy continues to climb, Canada will get more attention.
  • Looking at growth rates over the past few years, China has grown faster than the rest of the market, as have the Oil & Gas and Mining industries. The total Global 5000 grew 11.4% in 2010 and 12% in 2011. China-based Global 5000 companies grew 33.5% in 2010 and 30% in 2011. Oil & Gas firms reported growth of 22% and 24% for those same years, while mining companies grew even more: 40% in 2010 and 30% this past year. So, this deal hits right at the heart of a number of growth segments.
  • This is a second big deal by a Chinese company in the North American market — see our article earlier this year on the Chinese bank ICBC entering the US market via an acquisition.

The bottom line here is that China’s economy is huge, its growth — even at lower rates — is still a huge differential and it has a continually increasing need for energy resources. Canadian companies have those resources so we can expect more deals and activity.

For more information about The Global 5000 and companies like these that are included, visit the database page.

First group of Gilbane sponsors posted for Boston conference

Conference planning is starting to ramp up. See our first group of Gilbane sponsors, and don’t forget the call for papers!

Making big data analytics accessible to marketers

The recent announcement of SAS Visual Analytics highlights four important characteristics of big data that are key to marketing organizations’ ability to use big data analytics effectively:

  • Visualization is a challenge for big data analysis, and we’ll continue to see new approaches to presenting and interacting with it. Better visualization tools are necessary not just because those who aren’t data scientists need to understand and work with the data, but because increased efficiency and faster reaction to the data are critical in many cases – especially for marketers who need to react with lightning speed to current user experiences.
  • In case it isn’t obvious, visualization tools need to work where marketers can access them on web and mobile platforms.
  • In-memory data processing is necessary to support the required speed of analysis. This is still rare.
  • Big data is not only about unstructured data. Relational data and database tools are still important for incorporating structured data.

SAS is far from the only company driving new big data analytic technology, but they are the biggest and seem determined to stay on the front edge.

Why marketing is the next big money sector in technology

Ajay Agarwal from Bain Capital Ventures predicts that, because of the confluence of big data and marketing, marketing is the next big money sector in technology and will produce several new multi-billion dollar companies. His post is succinct and convincing, but there are additional reasons to believe he is correct.

Marketing spending more on IT than IT

Ajay opens his post with a quote from Gartner Group: “By 2017, a CMO will spend more on IT than the CIO”. It is difficult to judge this prediction without evaluating the supporting research, but it doesn’t sound unreasonable and the trend is unmistakable. Our own experience as conference organizers and consultants offers strong support for the trend. We cover the use of web, mobile, and content technologies for enterprise applications, and our audience has historically been 50% IT and 50% line of business or departmental. Since at least 2008 there has been a pronounced and steady increase in the percentage of marketers in our audience, so that 40% or more of attendees are now either in marketing, or in IT but assigned to marketing projects – this is about double what it was in earlier years. While web content management vendors have moved aggressively to incorporate marketing-focused capabilities and are now broadly positioned as hubs for customer engagement, the real driver is the success of the web. Corporate web sites have become the organizations’ new front door; companies have recognized this; and marketers are demanding tools to manage the visitor experience. Even during the peak of the recession spending on web content management, especially for marketing applications, was strong.

“Cloud” computing and workforce demographics have also beefed up marketers’ mojo. The increased ability to experiment and deploy applications without the administrative overhead and cost of IT or of software licenses has encouraged marketers to learn more about the technology tools they need to perform and helped instill the confidence necessary to take more control over technology purchases. A younger more tech-savvy workforce adds additional assertiveness to marketing (and all) departments. Now if only marketers had more data scientists and statisticians to work with…

Big data and big analytics

Big data has not caused, or contributed very much to, the increase in marketing spending to date. Certainly there are very large companies spending lots of money on analyzing vast amounts of customer data from multiple sources, but most companies still don’t have enough data to warrant the effort of implementing big data technologies, and most technology vendors don’t yet support big data technologies sufficiently, if at all. I agree with Ajay, though, that the “several multi-billion dollar” marketing technology companies that may emerge will have to have core big data processing and analytic strengths.

And not just because of the volume. One of the main reasons for the enterprise software bias for back office applications was that front office applications beyond simple process automation and contact data collection were just too difficult because they required processing unstructured, or semi-structured, data. Big data technologies don’t address all the challenges of processing unstructured data, but they take us a long way as tools to manage it.

The level of investment in this space is much greater than most realize. Ajay is right to invest in it, but he is not alone.