
Big data and decision making: data vs intuition

There is certainly hype around ‘big data’, as there always has been and always will be around many important technologies or ideas – remember the hype around the Web? Just as annoying is the anti-big-data backlash hype, typically built around straw men – does anyone actually claim that big data is useful without analysis?

One unfair characterization both sides indulge in involves the role of intuition, which is viewed either as the last lifeline for data-challenged and threatened managers, or as the way real men and women make the smart difficult decisions in the face of too many conflicting statistics.

Robert Carraway, a professor who teaches Quantitative Analysis at UVA’s Darden School of Business, has good news for both sides. In a post on big data and decision making in Forbes, “Meeting the Big Data challenge: Don’t be objective” he argues “that the existence of Big Data and more rational, analytical tools and frameworks places more—not less—weight on the role of intuition.”

Carraway first mentions the Corporate Executive Board’s finding that, of over 5,000 managers surveyed, 19% were “Visceral decision makers” relying “almost exclusively on intuition.” The rest were more or less evenly split between “Unquestioning empiricists,” who rely entirely on analysis, and “Informed skeptics … who find some way to balance intuition and analysis.” The assumption of both the study and Carraway was that the informed skeptics had the right approach.

A different study, “Frames, Biases, and Rational Decision-Making in the Human Brain”, conducted at the Institute of Neurology at University College London, tested for correlations between the influence of ‘framing bias’ (just what it sounds like – making different decisions on the same problem depending on how the problem is framed) and degree of rationality. Using fMRI, the study measured which areas of the brain were active. In the most rational subjects (those least influenced by framing), activity took place in the prefrontal cortex, where reasoning happens; in the least rational (those most influenced by framing), in the amygdala, home of emotions; and in those in between (“somewhat susceptible to framing, but at times able to overcome it”), in the cingulate cortex, where conflicts are addressed.

It is this last correlation that is suggestive to Carraway, and it is what he maps to being an informed skeptic. In real life, we have to make decisions without all, or enough, data, and a predilection for relying on either data or intuition alone can easily lead us astray. Our decision making benefits when our brain sees a conflict between what the data say and what our intuition is telling us – a conflict that calls for skeptical analysis. In other words, intuition is a partner in the dance, and the implication is that it is always in the dance – it always has a role.

Big data and all the associated analytical tools provide more ways to find bogus patterns that fit what we are looking for. This makes it easier to find false support for a preconception. So just looking at the facts – just being “objective” – just being “rational” – is less likely to be sufficient.

The way to improve the odds is to introduce conflict – call in the cingulate cortex cavalry. If you have a preconceived belief, acknowledge it, and then try to refute it with the data rather than support it.

“the choice of how to analyze Big Data should almost never start with ‘pick a tool, and use it’. It should invariably start with: pick a belief, and then challenge it. The choice of appropriate analytical tool (and data) should be driven by: what could change my mind?…”
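Carraway’s “pick a belief, and then challenge it” can be sketched in code. The following is only an illustration, not anything he prescribes: the data are invented, and a permutation test is just one way to let data attempt to refute a preconception.

```python
import random

# Hypothetical data: weekly sales before and after a campaign we
# *believe* worked. The belief to challenge: "the campaign raised sales."
before = [102, 98, 95, 101, 99, 97, 103, 100]
after  = [104, 99, 101, 106, 98, 102, 105, 103]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(after) - mean(before)

# Try to refute the belief: if randomly shuffling the before/after labels
# reproduces a lift this large, chance alone explains the "pattern."
random.seed(0)
pooled = before + after
n = len(before)
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[n:]) - mean(pooled[:n])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed lift: {observed:.2f}, p-value: {p_value:.3f}")
```

If shuffling reproduces the observed lift often (a large p-value), the belief failed its challenge; a small p-value means the belief survived an honest attempt at refutation.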

Of course conflict isn’t only possible between intuition and data. It can also be created between different data patterns. Carraway has an earlier related post, “Big Data, Small Bets“, that looks at creating multiple small experiments for big data sets designed to minimize identifying patterns that are either random or not significant.
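One way to read the “small bets” idea in code: before trusting a pattern mined from a big data set, re-test it on many small random subsamples and count how often it survives. This sketch uses synthetic noise and an illustrative cutoff; it is not Carraway’s actual method.

```python
import random

random.seed(1)

# Synthetic "big data": 10,000 points of pure noise, no real relationship.
xs = [random.gauss(0, 1) for _ in range(10_000)]
ys = [random.gauss(0, 1) for _ in range(10_000)]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# "Small bets": does an apparent relationship hold up in 100-point subsamples?
survived = 0
for _ in range(200):
    idx = random.sample(range(len(xs)), 100)
    sub = corr([xs[i] for i in idx], [ys[i] for i in idx])
    if abs(sub) > 0.2:          # illustrative cutoff for "a pattern"
        survived += 1

print(f"pattern 'confirmed' in {survived} of 200 small samples")
```

With pure noise the pattern survives in only a handful of subsamples; a real effect would survive in most of them, which is the signal the small bets are designed to surface.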

Thanks to Professor Carraway for elevating the discussion. Read his full post.

Technology and IT Spending Metric Options

When planning for global market growth and sizing up the opportunities in various countries, there is often a lack of data available from various industry sources. One could look at GDP figures or population data by country – both of those have some limitations. A better gauge might be to look at those business entities that generate the most revenue in each country as they will help contribute to other businesses in the geography and in general, raise the level of B2B activity overall.

Diving into the data of the Global 5000 companies – the 5000 largest companies in the world based on revenue – we find a couple of different ways to help guide your estimates of market size and rank order.

The first list shows the top 10 countries ranked by the number of firms in our Global 5000 database headquartered there.

  • USA – 2148
  • Japan – 334
  • China – 221
  • UK – 183
  • Canada – 124
  • Germany – 98
  • France – 84
  • Australia – 77
  • India – 76
  • Italy – 65

For each company in the database, there is an estimate for the amount spent on IT – both internal and external costs. When we take those amounts for each country and look at the average IT spending for these leading firms, we see a different order of countries which would also prove to be attractive targets.

  • France – $902 million per company
  • Germany
  • Netherlands
  • Spain
  • Venezuela
  • Italy
  • China
  • Switzerland
  • South Korea
  • New Zealand – $545 million per company

Of course, all these companies are the biggest of the big and not all companies in that country will spend at that level — but it is indicative of the relative IT spending on a country basis and again shows some of the potential for attractive markets as you eye global opportunities.

Learn more about the Global 5000 database.

Coherence and Augmentation: KM-Search Connection

This space is not normally used to comment on knowledge management (KM), one of my areas of consulting, but a recent conference gives me an opening to connect the dots between KM and search. Dave Snowden and Tom Stewart always have worthy commentary on KM and as keynote speakers they did not disappoint at KMWorld. It may seem a stretch but by taking a few of their thoughts out of context, I can synthesize a relationship between KM and search.

KMWorld, Enterprise Search Summit, SharePoint Symposium and Taxonomy Boot Camp moved to Washington D.C. for the 2010 Fall Conference earlier this month. I attended to teach a workshop on building a semantic platform, and to participate in a panel discussion to wrap up the conference with two other analysts, Leslie Owen and Tony Byrne with Jane Dysart moderating.

Comments from the first and last keynote speakers of the conference inspired my final panel comments, counseling attendees to lead by thoughtfully leveraging technology only to enhance knowledge. But there were other snippets that prompt me to link search and KM.

Tom Stewart’s talk, entitled Knowledge Driven Enterprises: Strategies & Future Focus, was couched in the context of achieving a “coherent” winning organization. He explained that reaching the coherence destination requires understanding the different types of knowledge and how we need to behave to attain each type (e.g. “knowable complicated” knowledge calls for experts and research; “emergent complex” knowledge calls for leadership and “sense-making”).

Stewart describes successful organizations as those in which “the opportunities outside line up with the capabilities inside.” He explains that “companies who do manage to reestablish focus around an aligned set of key capabilities” use their “intellectual capital” to identify their intangible assets: human capability, structural capital, and customer capital. They build relationship capital from among these capabilities to create a coherent company. Although Stewart does not mention “search,” it is important to note that one means of identifying intangible assets is well-executed enterprise search with associated analytical tools.

Dave Snowden also referenced “coherence,” (messy coherence), even as he spoke about how failures tend to be more teachable (memorable) than successes. If you follow Snowden, you know that he founded the Cognitive Edge and has developed a model for applying cognitive learning to help build resilient organizations. He has taught complexity analysis and sense-making for many years and his interest in human learning behaviors is deep.

To follow the entire thread of Snowden’s presentation, “The Resilient Organization,” follow this link. I was particularly impressed with his statement about the talk, “one of the most heart-felt I have given in recent years.” It was one of his best, but two particular comments bring me to the connection between KM and search.

Dave talked about technology as “cognitive augmentation,” its only truly useful function. He also puts forth what he calls the “three Golden rules: Use of distributed cognition, wisdom but not foolishness of crowds; finely grained objects, information and organizational; and disintermediation, putting decision makers in direct contact with raw data.”

Taking these fragments of Snowden’s talk, a technique he seems to encourage, I put forth a synthesized view of how knowledge and search technologies need to be married for consequential gain.

We live and work in a highly chaotic information soup, one in which we are fed a steady diet of fragments (links, tweets, analyzed content) from which we are challenged, as thinkers, to derive coherence. The best knowledge practitioners will leverage this messiness by detecting weak signals and seeking out more fragments, coupling them thoughtfully with “raw data” to synthesize new innovations, whether they be practices, inventions or policies. Managing shifting technologies, changing information inputs, and learning from failures (our own, our institution’s, and others’) contributes to building a resilient organization.

So where does “search” come in? Search is a human operation, and it begins with the workforce. Going back to Stewart, who commented on the need to recognize different kinds of knowledge, I posit that different kinds of knowledge demand different kinds of search. This is precisely what so many “enterprise search” initiatives fail to deliver. Implementers fail to account for all the different kinds of search: search for facts, search for expertise, search for specific artifacts, search for trends, search for missing data, and so on.
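The claim that different kinds of knowledge demand different kinds of search can be made concrete as a routing layer that classifies the information need before dispatching it to a suitably configured index. Everything below – the intent labels, index names and ranking modes – is hypothetical, an architectural sketch rather than any product’s API:

```python
# Hypothetical mapping of information-need types to search back ends.
# Index names and ranking choices are illustrative only.
SEARCH_ROUTES = {
    "fact":      {"index": "documents", "rank_by": "relevance"},
    "expertise": {"index": "people",    "rank_by": "authority"},
    "artifact":  {"index": "records",   "rank_by": "exact_match"},
    "trend":     {"index": "analytics", "rank_by": "recency"},
}

def route_query(need_type: str, query: str) -> dict:
    """Dispatch a query to the back end suited to its kind of knowledge."""
    route = SEARCH_ROUTES.get(need_type)
    if route is None:
        raise ValueError(f"no search configured for need type: {need_type}")
    return {"query": query, **route}

# An expertise question should land on a people index, not a document index.
plan = route_query("expertise", "who knows our EU data-retention rules?")
print(plan)
```

The design point is simply that the routing decision is explicit and human-configured; a single one-size-fits-all index is what makes so many initiatives fall short.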

When Dave Snowden states that “all of your workforce is a human scanner,” this could also imply the need for multiple, co-occurring search initiatives. Just as each workforce member brings a different perspective and capability to sensory information gathering, so too must enterprise search be set up to accommodate all the different kinds of knowledge gathering. And when Snowden notes that “There are limits to semantic technologies: Language is constantly changing so there is a requirement for constant tuning to sustain the same level of good results,” he is reminding us that technology is only good for cognitive augmentation. Technology is not “plug ’n play”; you cannot simply install it and reap magical cognitive insights. It requires constant tuning to adapt to new kinds of knowledge.

The point is one I have made before: it is the human connection, the human scanner, and the human understanding of all the kinds of knowledge we need that bring coherence to an organization. The better we balance these human capabilities, the more resilient we will be and the better skilled at figuring out which search technologies really make sense for today; and tomorrow we had better be ready for another tool, for new fragments and new knowledge synthesis.

What Does an Analyst Do for You?

One of the roles I have chosen for myself as Lead Analyst for Enterprise Search at the Gilbane Group is to evaluate, in broad strokes, the search marketplace for internal use at enterprises of all types. My principal audience is those within enterprises who may be involved in the selection, procurement, implementation and deployment of search technology to benefit their organizations. In this role, I am an advocate for buyers. However, when vendors pay attention to what I write, it should help them understand the buyer’s perspective. Ultimately, good vendors incorporate analyst guidance into their thinking about how to serve their customers better.

We do not hide the fact that, as industry analysts, we also consult to various content software companies. When doing so, I try to keep in mind that the market will be served best when I honestly advocate for software and service improvements that will benefit buyers. This is of value to those who sell and those who buy software. My consulting to vendors indirectly benefits both audiences.

Analysts also consult to buyers, helping them make informed decisions about technology and business relationships. I particularly enjoy and value those experiences because what I learn about enterprise buyers’ needs and expectations can translate directly into advice to vendors. This is an honest brokering role that comes naturally because I have been a software vendor and have also been in a position to make many software procurement decisions, particularly for tools and applications used by my development and service teams. I’m always enthusiastic to share important information about products with buyers, and information about buying audiences with those who build products. This can be done effectively while preserving confidentiality on both sides and making sure that everyone gets something out of the communications.

As an analyst, I receive a lot of requests from vendors to listen to briefings on their products, by phone and Web, or to meet one-on-one with their executives. You may have noticed that I don’t write reviews of specific products, although, in a particular context, I may reference products and applications. While we understand why product vendors want analysts to pay attention to them, I don’t find briefings particularly enlightening unless I know nothing about a company and its offerings. For these types of overviews, I can usually find what I want to know on their Web site, in press releases and by poking around the Web. During briefings I want to drive the conversation toward user experiences and needs.

What I do like to do is talk to product users about their experiences with a vendor or a product. I like to know what the implementation and adoption experience is like and how their organization has been affected by product use, both benefits and drawbacks. It is not always easy to gain access to customers, but I have ways of finding them, and I also encourage readers of this blog to reach out with your stories. I am delighted to learn more through comments on the blog, an email or a phone call. If you are willing to chat with me for a while, I will call you at your convenience.

The original topic I planned to write about this week will have to wait because, after receiving over 20 invitations to “be briefed” in the past few days, I decided it was more important to let readers know who I want to be briefed by – search technology users are my number one target. Vendors, please push your customers in this direction if you want me to pay attention. This can bring you a lot of value, too. It is a matter of trust.

Only Humans can Ensure the Value of Search in Your Enterprise

While considering what is most important in selecting the search tools for any given enterprise application, I took a few minutes off to look at the New York Times. The article, He Wrote 200,000 Books (but Computers Did Some of the Work), by Noam Cohen, gave me an idea about how to compare Internet search with enterprise search.

A staple of librarians’ reference and research arsenal has been a category of reference material called “bibliographies of bibliographies.” These works, specific to a subject domain, are aimed at a usually scholarly audience, bringing a vast amount of content into focus for the researcher. Judging from the article, that is what Mr. Parker’s artificial intelligence is doing for the average person who needs general information about a topic. According to at least one reader, the results are hardly scholarly.

This article points out several things about computerized searching:

  • It does a very good job of finding a lot of information easily.
  • Generalized Internet searching retrieves only publicly accessible, free-for-consumption, content.
  • Publicly available content is not universally vetted for accuracy, authoritativeness, trustworthiness, or comprehensiveness, even though it may be all of these things.
  • Vast amounts of accurate, authoritative, trustworthy and comprehensive content exist in electronic formats that the search algorithms used by Mr. Parker, or by the rest of us on the Internet, will never see. That is because the content is behind the firewall or accessible only through permission (e.g. subscription, need-to-know). None of his published books will serve up that content.

Another concept that librarians and scholars understand is that of primary source material. It is original content, developed (written, recorded) by human beings as a result of thought, new analysis of existing content, bench science, or engineering. It is often judged, vetted, approved or otherwise deemed worthy of the primary-source label by peers in the workplace, professional societies or professional publishers of scholarly journals. It is often the substance of what gets republished as secondary and tertiary sources (e.g. review articles, bibliographies, books).

We all need secondary and tertiary sources to do our work, learn new things, and understand our work and our world better. However, advances in technology, business operations, and innovation depend on sharing primary source material in thoughtfully constructed domains within our business, healthcare, and non-profit enterprises. A patient’s laboratory results or a mechanical device’s test data that spark the creation of primary source content need surrounding context to be properly understood and assessed for value and relevancy.

To be valuable, enterprise search needs to deliver context, relevance, opportunities for analysis and evaluation, and retrieval modes that give the best results for any user seeking valid content. There is a lot that computerized enterprise search can do to facilitate this type of research, but that is not the whole story. There must still be real people who select the most appropriate search product for that enterprise and that defined business case. They must also decide which content is to be indexed by the search engine based on its value, how it can be secured with proper authentication, how it should be categorized, and so on. To throw a computer search application at any retrieval need without human oversight is a waste of capital. It will result in disappointment, cynicism and skepticism about the value of automating search, because the resulting output will be no better than Mr. Parker’s books.
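The human decisions argued for above – what content merits indexing, how it is secured, how it is categorized – can be captured as an explicit indexing policy that the search engine merely executes. A minimal sketch; the field names and rules are invented:

```python
# Hypothetical content records awaiting indexing. A person, not the
# engine, decides the policy that each field encodes.
documents = [
    {"id": 1, "vetted": True,  "access": "public",     "category": "research"},
    {"id": 2, "vetted": False, "access": "public",     "category": "draft"},
    {"id": 3, "vetted": True,  "access": "restricted", "category": "lab-data"},
]

def indexing_policy(doc: dict) -> bool:
    """Admit only vetted content; restricted content must carry a
    category so authentication rules can be applied at query time."""
    if not doc["vetted"]:
        return False
    if doc["access"] == "restricted" and not doc.get("category"):
        return False
    return True

to_index = [d["id"] for d in documents if indexing_policy(d)]
print(to_index)  # document 2 is excluded: no human vetting yet
```

The code is trivial by design: the value lies in the human judgments encoded in the policy, not in the automation that applies it.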

Leading Enterprise Initiatives or Reacting to Crisis

My theme leading into the Gilbane Boston Conference this week comes straight from the headlines and the New Hampshire political ads that manage to spill over the border into our fair Commonwealth of Massachusetts. If you live outside the zone of early caucus and primary states, you are probably spared the ad nauseam recitations of all the crises that Rudy Giuliani has met and conquered. In thinking about our collective longing for a true leader in the White House, I began to reflect on all the other places I would like to see leadership. My musings brought me straight to a message I try to impart to clients and professional colleagues struggling with issues of leveraging knowledge and technology.

True leadership is very hard because it requires thinking, projecting and anticipating. It requires abstract thinking about possibilities for making improvements in complex areas. It requires the ability to mentally juggle huge numbers of variables, many of which the true leader knows he or she can’t possibly control but may be able to foresee as possible complications. It requires bucking the status quo.

Anyone can react, and many can react with reasonably appropriate actions that work for the immediate crisis. However, sizing up an enterprise in which things are running in a seemingly routine fashion, taking the initiative to systematically seek out lurking crises, potential problems and areas for improvement, and then applying thoughtful, incremental changes to ensure better outcomes may seem boring – but this is true leadership.

Finally, think about all the ways in which our political leaders seem to thrive on talking only about the monumental crises of the country and the world. Think about how our news is driven by immediate crises. We seem to be conditioned to react only to what we are being shown and told. True leaders are seekers, self-educators, investigators, learners and thinkers. Our best leaders are those who get to the core of our political and business enterprises and find a better way for the whole to work more smoothly, with the ultimate goal of bringing positive good to the members of the community. They succeed through personal diligence, finding the will to persevere while immersing themselves in the mundane and routine operations of their domains. They observe and they think about what they observe; they also talk to others and reflect mindfully on what they hear before acting.

As I prepare my opening remarks for several sessions on enterprise search and semantic technologies at the conference over the next three days, I am pondering how I can stimulate the audience to open their minds to what speakers and exhibitors introduce. I want them to think, really think, about what they are hearing. I want them to develop new ideas, new ways of innovating, new ways to make the mundane better, and to take these back to their enterprises with a purpose – not just as information to be used in the event of a direct work challenge, demand or crisis. I want to lead others to lead from a thoughtfully critical point of view. So, look at technologies from the perspective of action toward systemic improvements, not merely reaction to the latest crisis in your enterprise.

© 2019 Bluebill Advisors
