
Leveraging Search in Small Enterprises

In the 1970s, when “Big Blue” set the standard for top-notch sales and selling, a mantra for a small firm or start-up was “we need to out-IBM the IBMers.”

Search is just one aspect of being able to find what you need to leverage knowledge assets in your work, whether you are in a small firm, part of a small group in a large organization, or an individual consultant seeking to make the most of the masses of content and information surrounding your work.

My thoughts are inspired by a question asked by Andreas Gruber of Informations und Wissensmanagement in a recent post to the Enterprise Search Engine Professionals LinkedIn group. He posed a request for information, stating: “For enterprise search solutions for (very) small enterprises (10 to 200 employees), I find it hard to define success factors and it seems that there are not many examples available. If you follow e.g. the critical success factors from Martin White’s Enterprise Search book, most of them don’t seem to work for a small company – simply because none of them can/will invest in a search team etc.”

The upcoming Enterprise Search Europe meeting (May 14-16, 2013) in London is one focus of my attention at present. Since Martin White is the Chairman and principal organizer, Andreas’ comments resonated immediately. Concurrently, I am working on a project for a university department, which probably falls in the category of “small enterprise.” The other relevant project on my desk is a book I am co-authoring on “practical KM,” and we certainly aim to appeal to the individual practitioner or groups limited by capital resources. These areas of focus challenge me to respond to Andreas’ comments because I am certain they are top of mind for many; the excellent comments already at the posting show that others have good ideas about the topic as well.

Intangible capital is particularly significant in many small firms, in academia, and for independent consultants like me. Intensive leveraging of knowledge in the form of expertise, relationships, and processes is imperative in these domains. Intangible capital now surpasses tangible capital as a percentage of most businesses’ value, according to Mary Adams, founder of Smarter-Companies. Because intangible capital takes more thought and effort to identify, find or aggregate than hard assets, tools are needed to uncover, discover and pinpoint it.

Let’s take the example of expertise, an indisputable intangible asset of any professional services firm. For any firm, asking expert staff to put an explicit value on their knowledge, competencies or acumen for tackling the type of problem that you need to have solved may give you a sense of value, but you need more. The firm or professional you want to hire must be able to back up its value by providing explicit evidence that they “know their stuff” and can produce. For you, search is a tool to lead you to public or published evidence. For the firm being asked to bid on your work, you want them to be able to produce additional evidence. Top-quality firms put both human and technology search resources to work to service existing projects and clients, and to provide evidence of their qualifications when asked to retrieve relevant work or references. Search tools and content management methods are diverse and range from modest to very expensive in scope, but no firm can exist for long without technology to support the findability of its intangible capital.

To summarize, there are three principal ways that search pays off in the small-medium business (SMB) sector. With a few examples of each, they are:

  • Finding expertise (people): a potential client engagement principal or team member, answers to questions to fulfill a client’s engagement, spurring development or an innovation initiative
  • Retrieving prior work: reuse of know-how in new engagements, discovery of ideas previously tabled, learning, documentation of products and processes, building a proposal, starting point for new work, protecting intellectual property for leverage, when patenting, or participating in mergers and acquisitions.
  • Creating the framework for efficiency: time and speed, reinforcing what you know, supporting PR, communications, knowledge base, portraying the scope of intellectual capital (if you are a target for acquisition), the extent of your partnerships that can expand your ability to deliver, creating new offerings (services) or products.

So, to conclude my comment on Andreas’ posting, I would assert that you can “out-IBM the IBMers” or any other large organization by employing search to leverage your knowledge, people and relationships in smart and efficient ways. Excellent content and search practices can probably reduce your total human overhead because even one or two content and search specialists plus the right technology can deliver significant efficiency in intangible asset utilization.

I hope to see conference attendees who come from that SMB community so we can continue this excellent discussion in London, next month. Ask me about how we “ate our own dog-food” (search tools) when I owned a small software firm in the early 1980s. The overhead was minimal compared to the savings in support headcount.

Enterprise Search Strategies: Cultivating High Value Domains

At the recent Gilbane Boston Conference I was happy to hear the many and varied remarks positioning and defining “Big Data.” Like so much in the marketing sphere of high tech, answers begin with technology vendors but get refined and parsed by analysts and consultants, who need to set clear expectations about the actual problem domain. It’s a good thing that we have humans to do that defining because even the most advanced semantics would be hard pressed to give you a single useful answer.

I heard Sue Feldman of IDC give a pretty good “working definition” of big data at the Enterprise Search Summit in May, 2012. To paraphrase, it was:

  • > 100 TB up to petabytes, OR
  • > 60% growth a year of unstructured and unpredictable content, OR
  • Ultra high streaming content

We then get into debates about differentiating data from unstructured content when a phrase like “big data” is applied to unstructured content, which knowledge strategists like me tend to put into a category of packaged information. But never mind; technology solution providers will continue to come up with catchy buzz phrases to codify the problem they are solving, whether it makes semantic sense or not.

What does this have to do with enterprise search? In short, “findability” is an increasingly heavy lift due to the size and number of content repositories. We want to define quality findability as optimal relevance and recall.

A search technology era ago, publishers, libraries, and content management solution providers were focused on human curation of non-database content, applying controlled-vocabulary categories derived from decades of human-managed terminology lists. Automated search provided highly structured access interfaces to what we now call unstructured content. Once this model was supplanted by full-text retrieval, and new content originated in electronic formats, the proportion of un-categorized to human-categorized content ballooned.

Hundreds of models for automatic categorization have been rolled out to try to stay ahead of the electronic onslaught. The ones that succeed do so mostly because of continued human intervention at some point in the process of making content available to be searched. From human invented search algorithms, to terminology structuring and mapping (taxonomies, thesauri, ontologies, grammar rule bases, etc.), to hybrid machine-human indexing processes, institutions seek ways to find, extract, and deliver value from mountains of content.
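
As a concrete illustration of that hybrid pattern, here is a minimal sketch (in Python, with an invented toy taxonomy; real systems use far richer rule bases) of an auto-categorizer that assigns controlled-vocabulary terms where it can and queues everything else for a human indexer:

    # Toy taxonomy: controlled-vocabulary categories mapped to trigger phrases.
    # Categories and phrases are invented for illustration.
    TAXONOMY = {
        "Renewable Energy": ["solar", "photovoltaic", "wind turbine"],
        "Fossil Fuels": ["coal", "natural gas", "oil exploration"],
    }

    def categorize(document_text):
        """Return matched categories; an empty result signals human review."""
        text = document_text.lower()
        return {category for category, phrases in TAXONOMY.items()
                if any(phrase in text for phrase in phrases)}

    doc = "New photovoltaic materials may lower solar installation costs."
    categories = categorize(doc)
    if categories:
        print("Auto-assigned:", categories)
    else:
        print("No match; route to a human indexer for review.")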

This brings me to a pervasive theme from the conferences I have attended this year: the synergies among text mining, text analytics, extract/transform/load (ETL), and search technologies. These are being sought, employed and applied to specific findability issues in select content domains. It appears that the best results are delivered only when these criteria are first met:

  • The business need is well defined, refined and narrowed to a manageable scope. Narrowing scope of information initiatives is the only way to understand results, and gain real insights into what technologies work and don’t work.
  • The domain of content that has high value content is carefully selected. I have long maintained that a significant issue is the amount of redundant information that we pile up across every repository. By demanding that our search tools crawl and index all of it, we are placing an unrealistic burden on search technologies to rank relevance and importance.
  • Pre-processing solutions such as text mining and text analytics are applied to ferret out primary-source content and eliminate re-packaged variations that lack added value.
  • Pre-processing solutions such as ETL with text mining are applied to assist with content enhancement, adding consistent metadata that does not have a high semantic threshold but will suffice to answer a large percentage of non-topical inquiries. An example would be to find the “paper” that “Jerry Howe” presented to the “AMA” last year. (A minimal sketch of this kind of fielded lookup follows this list.)
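
Here is the sketch promised above, assuming an upstream ETL/text-mining step has already extracted author, document type, and venue into simple metadata records (the records, field names, and year are invented for illustration):

    # Invented metadata records, as an ETL/text-mining pass might emit them.
    RECORDS = [
        {"type": "paper", "author": "Jerry Howe", "venue": "AMA",
         "year": 2012, "title": "An invented example title"},
        {"type": "report", "author": "Jane Doe", "venue": "IEEE",
         "year": 2011, "title": "Another invented title"},
    ]

    def fielded_search(**criteria):
        """Return records whose metadata matches every supplied field."""
        return [r for r in RECORDS
                if all(r.get(field) == value
                       for field, value in criteria.items())]

    # "Find the paper that Jerry Howe presented to the AMA."
    for hit in fielded_search(type="paper", author="Jerry Howe", venue="AMA"):
        print(hit["title"], "-", hit["year"])

Consistent metadata of this modest kind answers the inquiry without any deep semantic analysis of the documents themselves.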

Business managers together with IT need to focus on eliminating redundancy by utilizing automation tools to enhance unique and high-value content with consistent metadata, thus creating solutions for special audiences needing information to solve specific business problems. By doing this we save the searcher the most time, while delivering the best answers to make the right business decisions and innovative advances. We need to stop thinking of enterprise search as a “big data,” single engine effort and instead parse it into “right data” solutions for each need.

Helping Enterprise Searchers Succeed

I begin 2012 with a new perspective on enterprise search, one gained purely as an observer. The venues have all been medical establishments with multiple levels of complexity and many types of healthcare workers. As the primary caregiver for a patient, and with some medical training, I take my role as observer and patient advocate quite seriously.

As soon as the patient was on the way to the emergency room, all of his medical records, insurance cards, medications, and contact information were assembled and brought to the hospital. With numerous critical care professionals intervening, and the patient being taken for various tests over several hours, I verbally imparted information I thought was important that might not yet show up in the system. Toward the end of the emergency phase, after being told several times that they had all his records available and “in the system” I relaxed to focus on the “next steps.”

Numerous specialists were involved in the medical conditions, and the first three days passed without “a crisis,” but little did we know that medication choices were beginning to cause some major problems. Apparently, some parts of the patient’s medical history were not fully considered, and once the medications caused adverse outcomes, all kinds of other problems arose.

Fortunately, I was there to verbally share knowledge that was in the patient’s medical records and get choices of medicine reversed. On several occasions, doctors’ care orders had been “overlooked” and complicating interventions were executed because the healthcare person “in the moment” took an action without “seeing” those orders. I personally watched the extensive recording of doctors’ decisions and confirmed with them changes that were being made to the patient’s care, but repeatedly had to ask why a change was not being implemented.

Observing for six to eight hours on several care floors, I can only say that time is the enemy for medical staff. When questions were raised, the answers were in the system; in other words, “search worked.” What was not available to staff was time to study the whole patient record and understand overlapping and sometimes conflicting orders about care.

It is shortsighted for any institution to believe that it can squeeze professionals to “think-fast,” “on-their-feet” for hours on end with no time to consider the massive amounts of searchable results they are able to assemble. Human beings should not be expected to sacrifice their professional integrity and work standards because their employers have put them in a constant time bind.

My family member had me, but what of patients with no one, or no one versed in medical conditions and processes, to intervene? This extends to every line of business where risk is involved, from the practice of law to engineering, manufacturing, design, research and development, testing, technical documentation writing, etc.

I don’t minimize how hard it is for businesses and professional services to stay profitable and competitive when they are being pressed to leverage technology for information resource management. However, one measure that every enterprise must embrace is educating its workforce about the use of the information technologies it employs. It is not enough to simply make a search engine interface accessible on the workstation. Every worker must be shown how to search for accurate, authoritative, and complete information, and be made aware of the ways to ingest and evaluate what they are finding. Finally, they must be given an alternative route to a more complete picture when the results don’t match the need, even if that alternative is to seek another human being instead of a technology.

Search experts are a professionally trained class of workers who can fill the role of trainers, particularly if they have subject matter expertise in the field where search is being deployed. The risks to any enterprise of short-changing workers by not allowing them to fully exploit and understand results produced from search are long-term and serious.

It is important to leave this entry with recognition that, due to wonderful healthcare professionals and support staff, the outcomes for the patient have been positive. People listened when I had information to share and respected my role in the process. That in no way absolves institutions and enterprises from giving their employees the autonomy and time to pay attention to all the information flooding their sphere of operation. In every field of endeavor, human beings need the time and environment to mindfully absorb, analyze and evaluate all the content available. Technology can aid but cannot carry out thoughtful professional practice.

Notice: Continue to follow me at my new site, http://bluebillinc.com/author/lynda-moulton/, coming soon.

Why is it so Hard to “Get” Semantics Inside the Enterprise?

Semantic Software Technologies: Landscape of High Value Applications for the Enterprise was published just over a year ago. Since then the marketplace has been increasingly active; new products emerge and discussion about what semantics might mean for the enterprise is constant. One thing that continues to strike me is the difficulty of explaining the meaning of, applications for, and context of semantic technologies.

Browsing through the topics in this excellent blog site, http://semanticweb.com , it struck me as the proverbial case of the blind men describing an elephant. A blog, any blog, is linear. While there are tools to give a blog dimension by clustering topics or presenting related information, it is difficult to understand the full relationships of any one blog post to another. Without a photographic memory, an individual does not easily connect ideas across a multi-year domain of blog entries. Semantic technologies can facilitate that process.

Those who embrace some concept of semantics are believers that search will benefit from “semantic technologies.” What is less clear is how evangelists, developers, searchers and the average technology user can coalesce around the applications that will semantically enable enterprise search.

On the Internet, content that successfully drives interest, sales, opinion and individual promotion does so through a combination of expert crafting of metadata, search engine technology that “understands” the language of the inquirer, and content that can satisfy the inquiry. Good answers are reached when questions are understood first and then the right content is selected to meet expectations.

In the enterprise, the same care must be given to metadata, search engine “meaning” analysis tools, and query interpretation for successful outcomes. Magic does not happen without people behind the scenes meeting these three criteria: executing linguistic curation, content enhancement and computational linguistic programming.

Three recent meeting events illustrate various states of semantic development and adoption, even as the next conference – the Semantic Tech & Business Conference in Washington, D.C. on November 29 – is upon us:

Event 1 – The IKS Community, a relatively new group funded by the EU, has been supporting open source software developers since 2009. In July they held a workshop in Paris, just past the mid-point of their life cycle. Attendees were primarily entrepreneurs and independent open source developers seeking pathways for their semantically “tuned” content management solutions. I was asked to suggest where opportunities and needs exist in US markets. They were an enthusiastic audience and are poised to meet the tough market realities of packaging highly sophisticated software for audiences that will rarely understand how complex the stuff “under the hood” really is. My principal charge to them was to create tools that “make it really easy” to work with vocabulary management and content metadata capture, updates, and enhancements.

Event 2 – On this side of the pond, UK firm Linguamatics hosted its user group meeting in Boston in October. Having interviewed a number of their customers last year to better understand their I2E product line, I was happy to meet people I had spoken with and see the enthusiasm of a user community vested in such complex technology. Most impressive is the respectful tone and thoughtful sharing between Linguamatics principals and their customers. They share the knowledge of how hard it is to continually improve search technology that delivers answers to semantically complex questions using highly specialized language. Content contributors and inquirers are all highly educated specialists seeking answers to questions that have never been asked before. Think about it: designing search engines to deliver results for frequently asked questions or to find content on popular topics is hard enough, but finding the answer to a brand-new question is a quantum leap in difficulty.

To make matters even more complicated, answers to semantic (natural language) questions may be found in internal content, in published licensed content or some combination of both. In the latter case, only the seeker may be able to put the two together to derive or infer an answer.

Publishers of content for licensing play a convoluted game of how they will license their content to enterprises for semantic indexing in combination with internal content. The Linguamatics user community is primarily in life sciences; this is one more hurdle for them to overcome to effectively leverage the vast published repositories of biological and medical literature. Rigorous pricing may be good business strategy, but research using semantic search could make more headway with more reasonable royalties that reflect the need for collaborative use across teams and partners.

Content wants to be found and knowledge requires outlets to enable innovation to flourish. In too many cases technology is impaired by lack of business resources by buyers or arcane pricing models of sellers that hold vital information captive for a well-funded few. Semantically excellent retrieval depends on an engine’s indexing access to all contextually relevant content.

Event 3 – Leslie Owens of Forrester Research conducted a very interesting interactive session at the Fall 2011 Enterprise Search Summit that further affirms the elephant and blind men metaphor. Leslie is a champion of metadata best practices and writes about the competencies and expertise needed to make valuable content accessible. She engaged the audience with a series of questions about its wants, needs, beliefs and plans for semantic technologies. As described in an earlier paragraph about how well semantics serves us on the Web, most of the audience puts its faith in that model but is doubtful of how or when similar benefits will accrue to enterprise search. Leslie and a couple of others made the point that a lot more work has to be done on the back end on content in the enterprise to get these high-value outcomes.

We’ll keep making the point until more adopters of semantic technologies get serious and pay attention to content, content enhancement, expert vocabulary management and metadata. If it is automatic understanding of your content that you are seeking, the vocabulary you need is one that you build out and enhance for your enterprise’s relevance. Semantic tools need to know the special language you use to give the answers you need.

Classifying Searchers – What Really Counts?

I continue to be impressed by the new ways in which enterprise search companies differentiate and package their software for specialized uses. This is a good thing because it underscores their understanding of different search audiences. Just as important is recognition that search happens in a context, for example:

  • Personal interest (enlightenment or entertainment)
  • Product selection (evaluations by independent analysts vs. direct purchasing information)
  • Work enhancement (finding data or learning a new system, process or product)
  • High-level professional activities (e-discovery to strategic planning)

Vendors understand that there is a limited market for a product or suite of products that will satisfy every budget, search context and the enterprise’s hierarchy of search requirements. Those who are the best focus on the technological strengths of their search tools to deliver products packaged for a niche in which they can excel.

However, for any market niche, excellence begins with six basics:

  • Customer relationship cultivation, including good listening
  • Professional customer support and services
  • Ease of system installation, implementation, tuning and administration
  • Out-of-the box integration with complementary technologies that will improve search
  • Simple pricing for licensing and support packages
  • Ease of doing business, contracting and licensing, deliveries and upgrades

While any mature and worthy company will have continually improved on these attributes, there are contextual differentiators that you should seek in your vertical market:

  • Vendor subject matter expertise
  • Vendor industry expertise
  • Vendor knowledge of how professional specialists perform their work functions
  • Vendor understanding of retrieval and content types that contribute the highest value

At a recent client discussion, the topic was the application of a highly specialized taxonomy. Their target content will be made available on a public-facing web site and also to internal staff. We began by discussing the various categories of terminology already extracted from a pre-existing system.

As we differentiated how internal staff needed to access content for research purposes from how the public is expected to search, patterns emerged for how differently content needs to be packaged for each constituency. For those of you who have specialized collections to be used by highly diverse audiences, this is no surprise. Before proceeding with decisions about term curation and determining the granularity of their metadata vocabulary, what has become a high priority is how the search mechanisms will work for different audiences.

For this institution, internal users must have pinpoint precision in retrieval on multiple facets of content to get to exactly the right record. They will be coming to search with knowledge of the collection and more certainty about what they can expect to find. They will also want to find their target(s) quickly. On the other hand, the public facing audience needs to be guided in a way that leads them on a path of discovery, navigating through a map of terms that takes them from their “key term” query through related possibilities without demanding arcane Boolean operations or lengthy explanations for advanced searching.

There is a clear lesson here for seeking enterprise search solutions. Systems that favor one audience over another will always be problematic. Therefore, establishing who needs what and how each goes about searching needs to be answered, and then matched to the product that can provide for all target groups.

We are in the season for conferences; there are a few next month that will be featuring various search and content technologies. After many years of walking exhibit halls and formulating strategies for systematic research and avoiding a swamp of technology overload, I try now to have specific questions formulated that will discover the “must have” functions and features for any particular client requirement. If you do the same, describing a search user scenario to each candidate vendor, you can then proceed to ask: Is this a search problem your product will handle? What other technologies (e.g. CMS, vocabulary management) need to be in place to ensure quality search results? Can you demonstrate something similar? What would you estimate the implementation schedule to look like? What integration services are recommended?

These are starting points for a discussion and will enable you to begin to know whether this vendor meets the fundamental criteria laid out earlier in this post. It will also give you a sense of whether the vendor views all searchers and their searches as generic equivalents or knows that different functions and features are needed for special groups.

Look for vendors for enterprise search and search related technologies to interview at the following upcoming meetings:

Enterprise Search Summit, New York, May 10 – 11 [...where you will learn strategies and build the skill sets you need to make your organization's content not only searchable but "findable" and actionable so that it delivers value to the bottom line.] This is the largest seasonal conference dedicated to enterprise search. The sessions are preceded by separate workshops with in-depth tutorials related to search. During the conference, focus on case studies of enterprises similar to yours for a better understanding of issues that you may need to address.

Text Analytics Summit, Boston, May 18 – 19 I spoke with Seth Grimes, who kicks off the meeting with a keynote, asking whether he sees a change in emphasis this year from straight text mining and text analytics. You’ll have to attend to get his full speech, but Seth shared that he sees a newfound recognition that “Big Data” is coming to grips with text source information as an asset that has special requirements (and value). He also noted that unstructured document complexities can benefit from text analytics to create semantic understanding that improves search, and that text analytics products are rising to the challenge of providing dynamic semantic analysis, particularly around massive amounts of social textual content.

Lucene Revolution, San Francisco, May 23 – 24 [...hear from ... the foremost experts on open source search technology to a broad cross-section of users that have implemented Lucene, Solr, or LucidWorks Enterprise to improve search application performance, scalability, flexibility, and relevance, while lowering their costs.] I attended this new meeting last year when it was in Boston. For any enterprise considering or leaning toward implementing open source search, particularly Lucene or Solr, this meeting will set you on a path for understanding what that journey entails.

How Far Does Semantic Software Really Go?

A discussion about semantic software technologies that began with a graduate scholar at George Washington University in November 2010 prompted him to follow up with some questions for clarification. With his permission, I am sharing three questions from Evan Faber and the gist of my comments to him. At the heart of the conversation we all need to keep having is: how far does this technology go, and does it really bring us any gains in retrieving information?

1. Have AI or semantic software demonstrated any capability to ask new and interesting questions about the relationships among information that they process?

In several recent presentations and the Gilbane Group study on Semantic Software Technologies, I share a simple diagram of the nominal setup for the relationship of content to search and the semantic core, namely a set of terminology rules or terminology with relationships. Semantic search operates best when it focuses on a topical domain of knowledge. The language that defines that domain may range from simple to complex, broad or narrow, deep or shallow. The language may be applied to the task of semantic search from a taxonomy (usually shallow and simple), a set of language rules (numbering thousands to millions) or from an ontology of concepts to a semantic net with millions of terms and relationships among concepts.

The question Evan asks is a good one with a simple answer, “Not without configuration.” The configuration needs human work in two regions:

  • Management of the linguistic rules or ontology
  • Design of search engine indexing and retrieval mechanisms

When a semantic search engine indexes content for natural language retrieval, it looks to the rules or semantic nets to find concepts that match those in the content. When it finds concepts in the content with no equivalent language in the semantic net, it must find a way to understand where the concepts belong in the ontological framework. This discovery process for clarification, disambiguation, contextual relevance, perspective, meaning or tone is best accompanied by an interface that makes it easy for a human curator or editor to update or expand the ontology. A subject matter expert is required for specialized topics. Through a process of automated indexing that both categorizes and exposes problem areas, the semantic engine becomes a search engine and a questioning engine.

The entire process is highly iterative. In a sense, the software is asking the questions: “What is this?”, “How does it relate to the things we already know about?”, “How is the language being used in this context?” and so on.
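
A stripped-down sketch of that loop, with an invented miniature semantic net (real nets hold millions of terms and relationships), shows how indexing can both categorize known concepts and surface unknown ones for a curator:

    # Invented miniature semantic net: surface terms mapped to preferred concepts.
    SEMANTIC_NET = {
        "myocardial infarction": "Heart Attack",
        "heart attack": "Heart Attack",
        "hypertension": "High Blood Pressure",
    }

    def index_document(text, curation_queue):
        """Map known phrases to concepts; queue unrecognized candidates."""
        text = text.lower()
        concepts = {concept for term, concept in SEMANTIC_NET.items()
                    if term in text}
        # Naive candidate detection: long words the net does not yet cover.
        known_words = {w for term in SEMANTIC_NET for w in term.split()}
        unknowns = [w.strip(".,") for w in text.split()
                    if len(w) > 9 and w not in known_words]
        curation_queue.extend(unknowns)  # the "What is this?" question
        return concepts

    queue = []
    print(index_document("Hypertension raises cardiomyopathy risk.", queue))
    print("For curator review:", queue)

The queue is where the subject matter expert “teaches” the net what a new concept is and where it belongs.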

2. In other words, once they [the software] have established relationships among data, can they use that finding to proceed – without human intervention – to seek new relationships?

Yes, in the manner described for the previous question. It is important to recognize that the original set of rules, ontologies, or semantic nets that are being applied were crafted by human beings with subject matter expertise. It is unrealistic to think that any team of experts would be able to know or anticipate every use of the human language to codify it in advance for total accuracy. The term AI is, for this reason, a misnomer because the algorithms are not thinking; they are only looking up “known-knowns” and applying them. The art of the software is in recognizing when something cannot be discerned or clearly understood; then the concept (in context) is presented for the expert to “teach” the software what to do with the information.

State-of-the-art software will have a back-end process for enabling implementer/administrators to use the results of search (direct commentary from users or indirectly by analyzing search logs) to discover where language has been misunderstood as evidenced by invalid results. Over time, more passes to update linguistic definitions, grammar rules, and concept relationships will continue to refine and improve the accuracy and comprehensiveness of search results.
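
One plausible back-end pass of that kind, sketched here as a scan of a search log for queries that repeatedly fail (the log format and the queries are invented for illustration):

    from collections import Counter

    # Invented log entries: (query, result_count, user_clicked_a_result).
    SEARCH_LOG = [
        ("share buyback policy", 0, False),
        ("stock repurchase policy", 12, True),
        ("share buyback policy", 0, False),
    ]

    def misunderstood_queries(log, min_occurrences=2):
        """Queries that repeatedly return nothing, or results no one opens,
        are candidate gaps in the vocabulary or rules."""
        failures = Counter(query for query, hits, clicked in log
                           if hits == 0 or not clicked)
        return [q for q, n in failures.items() if n >= min_occurrences]

    # Each flagged query suggests a definition or synonym the ontology lacks,
    # e.g. equating "share buyback" with "stock repurchase".
    print(misunderstood_queries(SEARCH_LOG))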

3. It occurs to me that the key value added of semantic technologies to decision-making is their capacity to link sources by context and meaning, which increases situational awareness and decision space. But can they probe further on their own?

Good point on the value, and in a sense, yes, they can. Through extensive algorithmic operations, instructions can be embedded (and probably are for high-value situations like intelligence work) telling the software what to do with newly discovered concepts. Instructions might then place these new discoveries into categories of relevance, importance, or associations. It would not be unreasonable to then pass documents with confounding information off to other semantic tools for further examination. Again, without human analysis along the continuum and at the end point, no certainty about the validity of the software’s decision-making can be asserted.

I can hypothesize a case in which a corpus of content contains random documents in foreign languages. From my research, I know that some of the semantic packages have semantic nets in multiple languages. If the corpus contains material in English, French, German and Arabic, these materials might be sorted and routed off to four different software applications. Each batch would be subject to further linguistic analysis, followed by indexing with some middleware applied to the returned results for normalization, and final consolidation into a unified index. Does this exist in the real world now? Probably there are variants but it would take more research to find the cases, and they may be subject to restrictions that would require the correct clearances.
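
A skeleton of such a pipeline might look like the following; the language detector is stubbed with a crude heuristic, and every name here is hypothetical (a real system would use a proper language-identification component and full per-language semantic engines):

    def detect_language(text):
        """Stand-in for a real language-identification step."""
        if any("\u0600" <= ch <= "\u06FF" for ch in text):  # Arabic script
            return "ar"
        return "en"  # crude default for this sketch

    # Hypothetical per-language analyzers, each wrapping its own semantic net.
    ANALYZERS = {
        "en": lambda text: {"lang": "en", "tokens": text.lower().split()},
        "ar": lambda text: {"lang": "ar", "tokens": text.split()},
    }

    def build_unified_index(corpus):
        """Route each document by language, then consolidate one index."""
        unified = []
        for doc in corpus:
            lang = detect_language(doc)
            analyzed = ANALYZERS.get(lang, ANALYZERS["en"])(doc)
            unified.append(analyzed)  # middleware would normalize results here
        return unified

    print(build_unified_index(["Energy policy research", "بحوث سياسة الطاقة"]))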

Discussions with experts who have actually employed enterprise-specific semantic software underscore the need for subject expertise and some computational linguistics training, coupled with an aptitude for creative inquiry. These scientists informed me that individuals who are highly multi-disciplinary and facile with electronic games and tools did the best job of interacting with the software and getting excellent results. Tuning and configuration over time by the right human players is still a fundamental requirement.

Enterprise Trends: Contrarians and Other Wise Forecasters

The gradual upturn from the worst economic conditions in decades is reason for hope. A growing economy, coupled with continued adoption of enterprise software in spite of the tough economic climate, keeps me tuned to what is transpiring in this industry. Rather than being cajoled into believing that “search” has become commodity software, which it hasn’t, I want to comment on the wisdom of Jill Dyché and her Anti-predictions for 2011 in a recent Information Management Blog. There are important lessons here for enterprise search professionals, whether you have already implemented or plan to soon.

Taking her points out of order, I offer a bit of commentary on those that have a direct relationship to enterprise search. Based on past experience, Ms. Dyché predicts some negative outcomes but with a clear challenge for readers to prove her wrong. As noted, enterprise search offers some solutions to meet the challenges:

  1. No one will be willing to shine a bright light on the fact that the data on their enterprise data warehouse isn’t integrated. It isn’t just the data warehouse that lacks integration among assets, but all applications housing critical structured and unstructured content. This does not have to be the case. Several state-of-the-art enterprise search products that are not tied to a specific platform or suite of products do a fine job of federating the indexing of disparate content repositories. In a matter of weeks or a few months, a search solution can be deployed to crawl, index and search multiple sources of content. Furthermore, newer search applications are being offered for pre-purchase testing for out-of-the-box suitability in pilot or proof-of-concept (POC) projects. Organizations that are serious about integrating content silos have no excuse for not taking advantage of easier-to-deploy search products.
  2. Even if they are presented with proof of value, management will be reluctant to invest in data governance. Combat this entrenched bias with a strategy to overcome lack of governance; a cost-cutting argument is unlikely to change minds. However, risk is an argument that will resonate, particularly when bolstered with examples. Include instances when customers were lost due to poor performance or failure to deliver adequate support services, sales were lost because answers to qualifying questions could not be provided or were not timely, legal or contract issues could not be defended due to inaccessibility of critical supporting documents, or maintenance revenue was lost due to incomplete, inaccurate or late renewal information getting out to clients. One simple example is the consequence of not sustaining a concordance of customer name, contact, and address changes. The inability of content repositories to talk to each other, or to aggregate related information in a search because a customer labeled as Marion University at one address is the same as the customer labeled University of Marion at another address, will be embarrassing in communications and, even worse, costly. (See the name-matching sketch after this list.) Governance of processes like naming conventions and standardized labeling enhances the value and performance of every enterprise system, including search.
  3. Executives won’t approve new master data management or business intelligence funding without an ROI analysis. This ties in with the first item because many enterprise search applications include excellent tools for performing business intelligence, analytics, and advanced functions to track and evaluate content resource use. The latter is an excellent way to understand who is searching, for what types of data, and the language used to search. These supporting functions are being built into applications for enterprise search and do not add additional cost to product licenses or implementation. Look for enterprise search applications that are delivered with tools that can be employed on an ad hoc basis by any business manager.
  4. Developers won’t track their time in any meaningful way. This is probably true because many managers are poorly equipped to evaluate what goes into software development. However, in this era of adoption of open source, particularly for enterprise search, organizations that commit to using Lucene or Solr (open source search) must be clear on the cost of building these tools into functioning systems for their specialized purposes. Whether development will be done internally or by a third party, it is essential to place strong boundaries around each project and deployment, with specifications that stage development, milestones and change orders. “Free” open source software is not free or even cost effective when an open meter for “time and materials” exists.
  5. Companies that don’t characteristically invest in IT infrastructure won’t change any time soon. So, the silo-ed projects will beget more silo-ed data… Because the adoption rate for new content management applications is so high, and the ease of deploying them encourages replication like rabbits, it is probably futile to try to staunch their proliferation. This is an important area for governance to be employed: to detect redundancy, perform analytics across silos, and call attention to obvious waste and duplication of content and effort. Newer search applications that can crawl and index a multitude of formats and repositories will easily support efforts to monitor and evaluate what is being discovered in search results. Given a little encouragement to report redundancy and replicated content, every user becomes a governor over waste. Play on the natural inclination for people to complain when they feel overwhelmed by messy search results by setting up a simple (click a button) reporting mechanism to automatically issue a report or set a flag in a log file when a search reveals a problem.
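
For the customer-name concordance problem in item 2, here is a minimal sketch of one common normalization approach (stopword stripping plus order-free token comparison; the threshold and stopword list are invented for illustration):

    import difflib

    STOPWORDS = {"of", "the", "at"}

    def normalize(name):
        """Lowercase, drop stopwords, sort tokens for order-free comparison."""
        tokens = [t for t in name.lower().split() if t not in STOPWORDS]
        return " ".join(sorted(tokens))

    def likely_same_customer(a, b, threshold=0.85):
        ratio = difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()
        return ratio >= threshold

    # "Marion University" and "University of Marion" normalize identically,
    # so the two records can be flagged as a likely duplicate for review.
    print(likely_same_customer("Marion University", "University of Marion"))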

It is time to stop treating enterprise search like a failed experiment and instead, leverage it to address some long-standing technology elephants roaming around our enterprises.

To follow other search trends for the coming year, you may want to attend a forthcoming webinar, 11 Trends in Enterprise Search for 2011, which I will be moderating on January 25th. These two blogs also have interesting perspectives on what is in store for enterprise applications: CSI Info-Mgmt: Profiling Predictors 2011, by Jim Ericson and The Hottest BPM Trends You Must Embrace In 2011!, by Clay Richardson. Also, some of Ms. Dyché’s commentary aligns nicely with “best practices” offered in this recent beacon, Establishing a Successful Enterprise Search Program: Five Best Practices

Data Mining for Energy Independence

Mining content for facts and information relationships is a focal point of many semantic technologies. Among the text analytics tools are those for mining content in order to process it for further analysis and understanding, and indexing for semantic search. This will move enterprise search to a new level of research possibilities.

Research for a forthcoming Gilbane report on semantic software technologies turned up numerous applications used in the life sciences and publishing. Neither semantic technologies nor text mining is mentioned in this recent article, Rare Sharing of Data Leads to Progress on Alzheimer’s, in the New York Times, but I am pretty certain that these technologies had some role in enabling scientists to discover new data relationships and synthesize new ideas about Alzheimer’s biomarkers. The sheer volume of data from all the referenced data sources demands computational methods to distill and analyze.

One vertical industry poised for potential growth of semantic technologies is the energy field. It is a special interest of mine because it is a topical area in which I worked as a subject indexer and searcher early in my career. Beginning with the first energy crisis, the oil embargo of the mid-1970s, I worked in research organizations involved in both fossil fuel exploration and production, and alternative energy development.

A hallmark of technical exploratory and discovery work is the time gaps between breakthroughs; there are often significant plateaus between major developments. This happens when research reaches a point at which an enabling technology is not available or commercially viable to move to the next milestone of development. I observed that the starting point in the quest for innovative energy technologies often began with decades-old research that stopped before commercialization.

Building on what we have already discovered, invented or learned is one key to success for many “new” breakthroughs. Looking at old research from a new perspective to lower costs or improve efficiency for such things as photovoltaic materials or electrochemical cells (batteries) is what excellent companies do.

How does this relate to semantic software technologies and data mining? We need to begin with content that was generated by research in the last century; much of this is just now being made electronic. Even so, most of the conversion from paper, or micro formats like fiche, is to image formats. In order to make the full transition to enable data mining, content must be further enhanced through optical character recognition (OCR). This will put it into a form that can be semantically parsed, analyzed and explored for facts and new relationships among data elements.
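
As a sketch of that OCR step, assuming the Tesseract engine and its Python wrapper are installed (the file path is a placeholder):

    from PIL import Image
    import pytesseract

    # Placeholder path to a scanned page from a legacy research report.
    scanned_page = Image.open("legacy_report_page_001.png")

    # OCR the page image into plain text that downstream semantic parsing
    # and entity extraction can work with.
    text = pytesseract.image_to_string(scanned_page)
    print(text[:200])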

Processing of old materials is neither easy nor inexpensive. There are government agencies, consortia, associations, and partnerships of various types of institutions that often serve as a springboard for making legacy knowledge assets electronically available. A great first step would be having DOE and some energy industry leaders collaborating on this activity.

A future of potential man-made disasters, even when knowledge exists to prevent them, is not a foregone conclusion. Intellectually, we know that energy independence is prudent, and economically and socially mandatory for all types of stability. We have decades of information and knowledge assets in energy related fields (e.g. chemistry, materials science, geology, and engineering) that semantic technologies can leverage to move us toward a future of energy independence. Finding nuggets of old information in unexpected relationships to content from previously disconnected sources is a role for semantic search that can stimulate new ideas and technical research.

A beginning is a serious program of content conversion capped off with use of semantic search tools to aid the process of discovery and development. It is high time to put our knowledge to work with state-of-the-art semantic software tools and by committing human and collaborative resources to the effort. Coupling our knowledge assets of the past with the ingenuity of the present, we can achieve energy advances using semantic technologies already embraced by the life sciences.

Where and How Can You Look for Good Enterprise Search Interface Design?

Designing an enterprise search interface that employees will use on their intranet is challenging in any circumstance. But starting from nothing more than verbal comments or even a written specification is really hard. However, conversations about what is needed and wanted are informative because they can be aggregated to form the basis for the overarching design.

Frequently, enterprise stakeholders will reference a commercial web site they like or even search tools within social sites. These are a great starting point for a designer to explore. It makes a lot of sense to visit scores of sites that are publicly accessible or sites where you have an account and navigate around to see how they handle various design elements.

To start, look at:

  • How easy is it to find a search box?
  • Is there an option to do advanced searches (Boolean or parametric searching)?
  • Is there a navigation option to traverse a taxonomy of terms?
  • Is there a "help" option with relevant examples for doing different kinds of searches?
  • What happens when you search for a word that has several spellings or synonyms, a phrase (with or without quotes), a phrase with the word “and” in it, a numeral, or a date?
  • How are results displayed: what information is included, what is the order of the results and can you change them? Can you manipulate results or search within the set?
  • Is the interface uncluttered and easily understood?

The point of this list of questions is that you can use it to build a set of criteria for designing what your enterprise will use and adopt, enthusiastically. But this is only a beginning. By actually visiting many sites outside your enterprise, you will find features that you never thought to include or aggravations that you will surely want to avoid. From these experiences on external sites, you can build up a good list of what is important to include or banish from your design.

When you find sites that you think are exemplary, ask key stakeholders to visit them and give you their feedback, preferences and dislikes. In particular, you want to note both what confuses them and the enthusiastic comments about what excites them.

This post originated because several press notices in the past month brought to my attention Web applications that have sophisticated and very specialized search applications. I think they can provide terrific ideas for the enterprise search design team and also be used to demonstrate to your internal users just what is possible.

Check out these applications and articles: KNovel, particularly this KNovel page; ThomasNet; and EBSCOHost, mentioned in this article about the “deep Web.” All these applications reveal superior search capabilities, have long track records, and are already used by enterprises every day. Because they are already successful in the enterprise, some by subscription, they are worth a second look as examples of how to approach your enterprise’s search interface design.